Understanding user scenarios and behaviors is essential for developing human-centered intelligent service systems. However, cluttered objects, uncertain human behaviors, and overlapping timelines in daily-life scenarios complicate scenario understanding. This paper addresses the challenges of identifying and predicting user scenario and behavior sequences through a multimodal data fusion approach, focusing on the integration of visual and environmental data to capture subtle scenario and behavioral features.
To this end, a novel Vision-Context Fusion Scenario Recognition (VCFSR) approach was proposed, encompassing three stages. First, four categories of context data related to home scenarios were acquired: physical context, time context, user context, and inferred context. Second, scenarios were represented as multidimensional data relationships through modeling technologies. Third, a scenario recognition model was developed, comprising context feature processing, visual feature handling, and multimodal feature fusion. For validation, a smart home environment was built, and twenty-six participants were recruited to perform various home activities. Integrated sensors were used to collect environmental context data, and video data was captured simultaneously; together these form a multimodal dataset. Results demonstrated that the VCFSR model achieved an average accuracy of 98.1%, outperforming traditional machine learning models such as decision trees and support vector machines. The method was then applied to fine-grained human behavior sequence prediction tasks, showing good performance in predicting behavior sequences across all scenarios constructed in this study. Furthermore, ablation experiments revealed that the multimodal feature fusion method increased average accuracy by at least 1.8% compared with single-modality data-driven methods.
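The fusion stage described above can be sketched in a simplified, hypothetical form: a context feature vector (sensor readings) and a visual feature vector (e.g., an embedding from a pretrained backbone) are each projected to a shared size, concatenated, and passed to a classifier. All dimensions, feature names, and the concatenation-based fusion shown here are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(x, w, b):
    """Affine projection followed by ReLU (assumed per-modality encoder)."""
    return np.maximum(w @ x + b, 0.0)

# Toy inputs: 8 environmental context readings (temperature, light,
# motion, ...) and a 128-d visual embedding; both sizes are assumptions.
context = rng.normal(size=8)
visual = rng.normal(size=128)

# Assumed projections mapping each modality to a shared 32-d space.
w_c, b_c = rng.normal(size=(32, 8)), np.zeros(32)
w_v, b_v = rng.normal(size=(32, 128)), np.zeros(32)

# Multimodal fusion by concatenation -> 64-d joint feature.
fused = np.concatenate([project(context, w_c, b_c),
                        project(visual, w_v, b_v)])

# Linear classifier over a hypothetical set of 6 home scenarios.
w_out = rng.normal(size=(6, 64))
logits = w_out @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()  # softmax scenario probabilities
```

In practice the per-modality encoders would be learned jointly with the classifier; the sketch only illustrates how the two feature streams are combined before prediction.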
This novel approach to user behavior modeling simultaneously captures the relationship threads that run across scenarios and the rich detail provided by visual data, paving the way for advanced intelligent services in complex interactive environments such as smart homes and hospitals.