A novel user scenario and behavior sequence recognition approach based on vision-context fusion architecture

Advanced Engineering Informatics · IF 9.9 · CAS Region 1 (Engineering & Technology) · JCR Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2025-05-01 · Epub Date: 2025-02-06 · DOI: 10.1016/j.aei.2025.103161
Wenyu Yuan, Danni Chang, Chenlu Mao, Luyao Wang, Ke Ren, Ting Han

Abstract

Understanding user scenarios and behaviors is essential for the development of human-centered intelligent service systems. However, cluttered objects, uncertain human behaviors, and overlapping timelines in daily-life scenarios complicate the problem of scenario understanding. This paper addresses the challenges of identifying and predicting user scenario and behavior sequences through a multimodal data fusion approach, focusing on the integration of visual and environmental data to capture subtle scenario and behavioral features.
For this purpose, a novel Vision-Context Fusion Scenario Recognition (VCFSR) approach was proposed, encompassing three stages. First, four categories of context data related to home scenarios were acquired: physical context, time context, user context, and inferred context. Second, scenarios were represented as multidimensional data relationships through modeling technologies. Third, a scenario recognition model was developed, comprising context feature processing, visual feature handling, and multimodal feature fusion. For illustration, a smart home environment was built, and twenty-six participants were recruited to perform various home activities. Integrated sensors collected environmental context data while video data was captured simultaneously, together forming a multimodal dataset. Results demonstrated that the VCFSR model achieved an average accuracy of 98.1%, outperforming traditional machine learning models such as decision trees and support vector machines. The method was then employed for fine-grained human behavior sequence prediction tasks, performing well across all scenarios constructed in this study. Furthermore, ablation experiments revealed that the multimodal feature fusion method increased the average accuracy by at least 1.8% compared to single-modality data-driven methods.
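The abstract names three model components (context feature processing, visual feature handling, and multimodal feature fusion) but does not specify the fusion mechanism. As a minimal illustrative sketch only, assuming a simple late-fusion-by-concatenation baseline over hypothetical visual and context feature vectors (all function names, dimensions, and the linear scenario head are assumptions, not the paper's implementation):

```python
import numpy as np

def extract_context_features(physical, time_ctx, user, inferred):
    """Flatten the four context categories from the paper (physical, time,
    user, inferred) into one context vector; the encodings are hypothetical."""
    return np.concatenate([physical, time_ctx, user, inferred])

def fuse_features(visual, context):
    """Late fusion by concatenation -- one common multimodal baseline;
    the paper's actual fusion layer may differ."""
    return np.concatenate([visual, context])

def classify_scenario(fused, weights, bias):
    """Illustrative linear scenario head with a numerically stable softmax."""
    logits = fused @ weights + bias
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Toy example: an 8-dim visual embedding, four 2-dim context encodings,
# and three candidate scenarios (all sizes are illustrative).
rng = np.random.default_rng(0)
visual = rng.normal(size=8)
context = extract_context_features(rng.normal(size=2), rng.normal(size=2),
                                   rng.normal(size=2), rng.normal(size=2))
fused = fuse_features(visual, context)   # 16-dim fused representation
probs = classify_scenario(fused, rng.normal(size=(16, 3)), np.zeros(3))
```

Concatenation is only one plausible reading of "multimodal feature fusion"; attention-based or gated fusion layers would fit the abstract's description equally well.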
This novel approach to user behavior modeling simultaneously handles the relationship threads across scenarios and the rich details provided by visual data, paving the way for advanced intelligent services in complex interactive environments such as smart homes and hospitals.
Source journal
Advanced Engineering Informatics (Engineering & Technology – Engineering: Multidisciplinary)
CiteScore: 12.40
Self-citation rate: 18.20%
Articles per year: 292
Review time: 45 days
Journal introduction: Advanced Engineering Informatics is an international journal that solicits research papers with an emphasis on 'knowledge' and 'engineering applications'. The journal seeks original papers that report progress in applying methods of engineering informatics. These papers should have engineering relevance and help provide a scientific base for more reliable, spontaneous, and creative engineering decision-making. Additionally, papers should demonstrate the science of supporting knowledge-intensive engineering tasks and validate the generality, power, and scalability of new methods through rigorous evaluation, preferably both qualitatively and quantitatively. Abstracting and indexing for Advanced Engineering Informatics include Science Citation Index Expanded, Scopus and INSPEC.
Latest articles in this journal
- Automated generation of assembly schedules for precast building projects under uncertainty using reinforcement learning and Monte Carlo sampling
- Continual health prognosis of machines via hypergraph topology-aware knowledge preserving and replay
- Application of GAN-based data augmentation and filtering methods for imbalanced grinding wheel specification classification
- A physics-informed and stochastic KAN framework for car-following behavior modeling of human-driven vehicles in mixed traffic flow
- Singularity-free prescribed performance control of a quadrotor UAV for precision agriculture