Objective
Improper spinal posture during activities of daily living such as sitting, standing, and walking, particularly under load-bearing conditions, has been recognized as a major contributor to musculoskeletal disorders, including chronic back pain and disc degeneration. This study presents a multimodal posture estimation and feedback framework that integrates wearable-sensor data with computer-vision analysis to support spinal health.
Methods
The system integrates data from inertial measurement units (IMUs) and flex sensors to quantify postural angles, while concurrently extracting key visual features from multi-view (frontal and lateral) video recordings and photographs using the MediaPipe framework. A control group of 40 subjects, selected without gender bias and aged between 19 and 22 years, was formed under the guidance of a physiotherapist, and data were collected at Tagore College of Physiotherapy in Chennai, India. The multimodal data were analyzed with five classifiers: logistic regression (LR), decision tree (DT), random forest (RF), k-nearest neighbors (KNN), and support vector machine (SVM).
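For readers unfamiliar with this kind of pipeline, the sketch below illustrates the general approach in Python: a sagittal trunk angle is derived from MediaPipe Pose landmarks on a lateral-view image, fused with sensor-derived angles into a feature matrix, and the five classifiers are compared with scikit-learn. This is a minimal illustration, not the study's actual protocol; the feature layout, synthetic data, and binary labels are placeholder assumptions.

```python
# Hypothetical sketch: vision-derived trunk angle fused with placeholder
# sensor angles, then a comparison of the five classifiers named above.
import cv2
import numpy as np
import mediapipe as mp
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

mp_pose = mp.solutions.pose

def trunk_angle_from_image(path):
    """Estimate sagittal trunk inclination (degrees from vertical) from the
    left shoulder-hip segment detected in a lateral-view image."""
    image = cv2.imread(path)
    with mp_pose.Pose(static_image_mode=True) as pose:
        res = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if res.pose_landmarks is None:
        return None  # no person detected
    lm = res.pose_landmarks.landmark
    sh = lm[mp_pose.PoseLandmark.LEFT_SHOULDER]
    hip = lm[mp_pose.PoseLandmark.LEFT_HIP]
    dx, dy = sh.x - hip.x, sh.y - hip.y  # normalized coords; y grows downward
    return np.degrees(np.arctan2(abs(dx), abs(dy)))

# Placeholder multimodal feature matrix: one row per trial, columns mixing
# vision- and sensor-derived angles (layout is illustrative only):
# X[:, 0] = trunk angle from video, X[:, 1:3] = IMU pitch/roll,
# X[:, 3] = flex-sensor lumbar angle; y = posture label (0 correct, 1 poor).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))          # replace with real extracted features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
models = {
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```

In practice, one such feature vector would be extracted per trial and per activity (sitting, standing, walking), with models trained and evaluated separately for each activity, as reported in the Results.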
Results
Among all evaluated models, the RF classifier performed most effectively and most consistently across all activities for both male and female subjects. On the combined dataset of both genders, it achieved accuracies of 75%, 95%, and 63% for sitting, standing, and walking, respectively.
Conclusion
The findings highlight that integrating wearable and visual modalities enhances posture classification accuracy. While the findings are preliminary, they establish a methodological foundation for future development of multimodal, feedback-based posture assessment systems.