Assessing single camera markerless motion capture during upper limb activities of daily living

Bradley Scott, Edward Chadwick, Mhairi McInnes, Dimitra Blana

Gait & Posture, September 2023. DOI: 10.1016/j.gaitpost.2023.07.222
Abstract
In a recent scoping review (Scott et al., 2022) we discussed how single camera markerless motion capture (SCMoCap) may facilitate motion analysis in situations where it would otherwise not be possible, such as at-home rehabilitation for children with cerebral palsy (Kidziński et al., 2020), and may enable more frequent data collection. However, few studies have reported measurement error in a clinically interpretable manner, and there is little evidence assessing SCMoCap during upper limb activities of daily living. A comprehensive validation of SCMoCap, alongside a clinically meaningful evaluation of results, would be invaluable for clinicians and future researchers interested in implementing upper limb movement analysis in clinical practice (Philp et al., 2021).

Are state-of-the-art single camera markerless motion capture methods suitable for measuring joint angles during a typical upper-limb functional assessment?

Study participants were instructed to perform a comprehensive set of physiological and functional movements that are typically part of an upper limb functional assessment. Each movement was repeated three times in both the frontal and sagittal planes. Movements were recorded simultaneously with a 10-camera OptiTrack Prime 13W marker-based motion capture setup (NaturalPoint, USA) and an Azure Kinect camera (Microsoft, USA). An eSync2 synchronization device (NaturalPoint, USA) was used to avoid exposure interference between the systems. Marker-based bony landmarks and joint centers were collected as recommended by the International Society of Biomechanics (Wu et al., 2005). Marker-based trajectories were processed in MotionMonitor xGen (Innovative Sports Training, USA), where a 20 Hz lowpass Butterworth filter was applied to marker positions. Markerless joint center positions were calculated using Azure Kinect body tracking.
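The abstract names the filter type and cutoff frequencies but not the filter order or implementation. A minimal sketch of the zero-phase lowpass step using SciPy (the order-4 filter and the synthetic trajectory are assumptions, not details from the study) might look like:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(positions, cutoff_hz, fs_hz, order=4):
    """Zero-phase lowpass Butterworth filter along the time axis.

    Order 4 is an assumption; the abstract only specifies type and cutoff.
    """
    b, a = butter(order, cutoff_hz, btype="low", fs=fs_hz)
    return filtfilt(b, a, positions, axis=0)  # forward-backward pass: no phase lag

# Example: filter a synthetic 120 Hz marker trajectory at 20 Hz,
# mirroring the marker-based processing step described above.
fs = 120                                            # OptiTrack recording frequency
t = np.arange(0, 2, 1 / fs)
clean = np.sin(2 * np.pi * 1.0 * t)                 # slow "movement" component
noisy = clean + 0.05 * np.sin(2 * np.pi * 50 * t)   # high-frequency noise
filtered = lowpass(noisy, cutoff_hz=20, fs_hz=fs)
```

The same helper would apply to the markerless data with `cutoff_hz=10` at the Kinect's native frame rate before upsampling.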
Markerless positions were filtered with a 10 Hz lowpass Butterworth filter, then upsampled to 120 Hz to match the OptiTrack recording frequency. Signals were time-synchronized using cross-correlation. Joint angles were calculated by solving inverse kinematics in OpenSim using Hamner's model (Hamner, Seth & Delp, 2010).

Here we present preliminary results of elbow flexion agreement from one participant during a cup drinking task (see Fig. 1). Agreement between the markerless and marker-based methods was evaluated in RStudio using Bland-Altman analysis (mean difference = -7.49°, upper limit of agreement = 20.87°, lower limit of agreement = -35.85°), the intra-class correlation coefficient (ICC = 0.91), and root mean squared error (RMSE = 16.30°).

Fig. 1: Elbow flexion angle during a cup drinking task.

Our preliminary results suggest good agreement between markerless and marker-based motion capture for elbow flexion during a cup drinking task. The Kinect underestimates joint angles at local maxima and minima (see Fig. 1), as reflected in the mean difference of -7.49°. The joint positions returned by Azure Kinect body tracking are also subject to sudden changes at extremes of motion that do not represent the true movement.
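The agreement statistics above were computed in RStudio; the abstract does not state which ICC form was used. A rough Python equivalent of the three metrics (ICC(2,1), two-way random, single measure, is one common choice and is assumed here; the sample angle values are hypothetical, for illustration only) could be:

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement between two methods."""
    diff = a - b
    md, sd = diff.mean(), diff.std(ddof=1)
    return md, md + 1.96 * sd, md - 1.96 * sd

def rmse(a, b):
    """Root mean squared error between paired measurements."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def icc_2_1(a, b):
    """ICC(2,1): two-way random effects, absolute agreement, single measure."""
    data = np.column_stack([a, b])
    n, k = data.shape                       # n frames, k = 2 methods
    grand = data.mean()
    subj = data.mean(axis=1)                # per-frame means (rows)
    rater = data.mean(axis=0)               # per-method means (columns)
    msr = k * np.sum((subj - grand) ** 2) / (n - 1)
    msc = n * np.sum((rater - grand) ** 2) / (k - 1)
    mse = np.sum((data - subj[:, None] - rater[None, :] + grand) ** 2) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical elbow-flexion samples (degrees), markerless reading lower:
markerless = np.array([80.1, 95.3, 110.2, 101.7, 85.4])
marker_based = markerless + np.array([5.0, 9.2, 8.1, 6.5, 8.7])
md, upper, lower = bland_altman(markerless, marker_based)
```

A negative mean difference here, as in the study, indicates that the markerless method reads lower than the marker-based reference on average.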