Nodes2STRNet for structural dense displacement recognition by deformable mesh model and motion representation

IF 3.4 · Q1 (Engineering, Mechanical) · International Journal of Mechanical System Dynamics · Pub Date: 2023-09-14 · DOI: 10.1002/msd2.12083
Jin Zhao, Hui Li, Yang Xu
Displacement is a critical indicator for mechanical systems and civil structures. Conventional vision-based displacement recognition methods mainly focus on the sparse identification of limited measurement points, and the motion representation of an entire structure is very challenging. This study proposes a novel Nodes2STRNet for structural dense displacement recognition using a handful of structural control nodes based on a deformable structural three-dimensional mesh model, which consists of a control node estimation subnetwork (NodesEstimate) and a pose parameter recognition subnetwork (Nodes2PoseNet). NodesEstimate calculates the dense optical flow field based on FlowNet 2.0 and generates structural control node coordinates. Nodes2PoseNet uses the structural control node coordinates as input and regresses structural pose parameters with a multilayer perceptron. A self-supervised learning strategy with a mean square error loss and L2 regularization is designed to train Nodes2PoseNet. The effectiveness and accuracy of dense displacement recognition, as well as the robustness to light condition variations, are validated by seismic shaking table tests of a four-story building model. Comparative studies with the image-segmentation-based Structure-PoseNet show that the proposed Nodes2STRNet achieves higher accuracy and better robustness against light condition variations. In addition, NodesEstimate does not require retraining when faced with new scenarios, and Nodes2PoseNet has high self-supervised training efficiency with only a few control nodes instead of fully supervised pixel-level segmentation.

Volume 3, Issue 3, pp. 229–250. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/msd2.12083
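The core of Nodes2PoseNet as described above is a small multilayer perceptron that maps control-node coordinates to structural pose parameters, trained with a mean-square-error loss plus L2 regularization. The sketch below illustrates that idea only; the layer sizes, regularization weight, and the synthetic supervised targets (standing in for the paper's self-supervised training signal) are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_NODES = 8   # number of structural control nodes (assumed)
N_POSE = 4    # number of pose parameters (assumed)
HIDDEN = 32   # hidden width (assumed)
LAM = 1e-4    # L2 regularization weight (assumed)
LR = 1e-2     # gradient-descent step size

# One-hidden-layer MLP parameters: node (x, y) coordinates in, pose out.
W1 = rng.normal(0, 0.1, (2 * N_NODES, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, N_POSE))
b2 = np.zeros(N_POSE)

def forward(x):
    """x: (batch, 2*N_NODES) flattened control-node coordinates."""
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def loss(pred, target):
    """Mean square error plus L2 penalty on the weights."""
    mse = np.mean((pred - target) ** 2)
    reg = LAM * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
    return mse + reg

# Synthetic (coordinates, pose) pairs standing in for real training data.
X = rng.normal(size=(256, 2 * N_NODES))
true_map = rng.normal(0, 0.5, (2 * N_NODES, N_POSE))
Y = X @ true_map

_, pred = forward(X)
initial = loss(pred, Y)

for step in range(500):
    h, pred = forward(X)
    err = 2.0 * (pred - Y) / Y.size          # d(MSE)/d(pred)
    gW2 = h.T @ err + 2 * LAM * W2           # L2 term adds 2*lam*W
    gb2 = err.sum(axis=0)
    dh = err @ W2.T * (1 - h ** 2)           # backprop through tanh
    gW1 = X.T @ dh + 2 * LAM * W1
    gb1 = dh.sum(axis=0)
    W1 -= LR * gW1; b1 -= LR * gb1
    W2 -= LR * gW2; b2 -= LR * gb2

_, pred = forward(X)
final = loss(pred, Y)
print(f"loss: {initial:.4f} -> {final:.4f}")
```

In the paper the supervision instead comes from the deformable mesh model itself (self-supervision), which is what lets the network train from only a few control nodes rather than pixel-level segmentation labels.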
Citations: 0




Source journal: International Journal of Mechanical System Dynamics · CiteScore: 3.50 · Self-citation rate: 0.00%