4D Feet: Registering Walking Foot Shapes Using Attention Enhanced Dynamic-Synchronized Graph Convolutional LSTM Network

Farzam Tajdari, Toon Huysmans, Xinhe Yao, Jun Xu, Maryam Zebarjadi, Yu Song

IEEE Open Journal of the Computer Society, vol. 5, pp. 343-355. Published 2024-03-29. DOI: 10.1109/OJCS.2024.3406645
Article: https://ieeexplore.ieee.org/document/10541055/
PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10541055
Abstract
4D scans of dynamic, deformable human body parts help researchers better understand spatiotemporal features. However, reconstructing 4D scans using multiple asynchronous cameras faces two main challenges: 1) finding dynamic correspondences among the frames captured by each camera at its own timestamps (dynamic feature recognition), and 2) reconstructing 3D shapes from the combined point clouds captured by different cameras at asynchronous timestamps (multi-view fusion). Here, we introduce a generic framework able to 1) find and align dynamic features in the 3D scans captured by each camera using a non-rigid iterative closest-farthest points algorithm; 2) synchronize scans captured by asynchronous cameras through a novel network based on an attention-enhanced dynamic-synchronized graph convolutional LSTM (ADGC-LSTM), which aligns the 3D scans captured by different cameras to the timeline of a specific camera; and 3) register a high-quality template to the synchronized scans at each timestamp using a non-rigid registration method, yielding a high-quality 3D mesh model. Using a newly developed 4D foot scanner, we validate the framework and create the first open-access dataset of its kind, named 4D-Feet. It includes 4D shapes (15 fps) of the left and right feet of 58 participants (116 feet, 5147 3D frames in total), covering the significant phases of the gait cycle. The results demonstrate the effectiveness of the proposed framework, especially in synchronizing asynchronous 4D scans.
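To give a concrete feel for the first stage, below is a minimal sketch of one plausible reading of a closest-farthest correspondence step between two point-cloud frames: each source point is matched to its nearest target point, and the farthest (largest-distance) matches are flagged as unreliable so a non-rigid fit can down-weight them. The function name, the quantile cutoff, and the outlier rule are illustrative assumptions, not the paper's actual formulation.

```python
# Sketch of a closest/farthest correspondence step, one plausible reading
# of the "non-rigid iterative closest-farthest points" idea. All names and
# the distance-quantile rule are assumptions made for illustration only.
import numpy as np
from scipy.spatial import cKDTree

def closest_farthest_correspondences(source, target, keep_quantile=0.8):
    """Match each source point to its closest target point; flag the
    farthest matches (beyond the keep_quantile distance) as unreliable.

    source, target: (N, 3) and (M, 3) arrays of 3D points.
    Returns indices into `target` and a boolean reliability mask.
    """
    tree = cKDTree(target)
    dists, idx = tree.query(source)             # closest-point matches
    cutoff = np.quantile(dists, keep_quantile)  # farthest matches beyond
    reliable = dists <= cutoff                  # this are down-weighted
    return idx, reliable

# Toy usage: two jittered samplings of the same surface patch.
rng = np.random.default_rng(0)
src = rng.random((500, 3))
tgt = src + 0.01 * rng.standard_normal((500, 3))
idx, ok = closest_farthest_correspondences(src, tgt)
print(f"{ok.mean():.0%} of matches kept for the registration step")
```

In a full non-rigid pipeline this correspondence step would be iterated with a deformation update between rounds; the sketch covers only the matching and outlier-flagging logic.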