A three-dimensional force estimation method for the cable-driven soft robot based on monocular images

Xiaohan Zhu, Ran Bu, Zhen Li, Fan Xu, Hesheng Wang

arXiv - CS - Robotics · Published 2024-09-12 · DOI: https://doi.org/arxiv-2409.08033
Citations: 0
Abstract
Soft manipulators are known for their superiority in high-safety-demanding interaction tasks, e.g., robot-assisted surgery and elderly care. Yet the difficulty of obtaining real-time contact feedback has hindered their further application in precise manipulation. This paper proposes an end-to-end network that estimates the 3D contact force of a soft robot, with the aim of enhancing its capabilities in interactive tasks. The presented method directly uses monocular images fused with multidimensional actuation information as the network inputs. Compared with related studies that feed 3D shape information to the network, this approach simplifies the preprocessing of raw data and thereby avoids configuration-reconstruction errors. A unified feature representation module is devised to elevate the low-dimensional features from the system's actuation signals to the same level as the image features, facilitating smoother integration of the multimodal information. The proposed method has been experimentally validated on a soft-robot testbed, achieving satisfactory accuracy in 3D force estimation (a mean relative error of 0.84%, compared to the best reported result of 2.2% in related works).
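The fusion step the abstract describes — lifting a low-dimensional actuation vector to the same level as CNN image features before combining the two modalities — can be sketched as follows. All shapes, the 6-D actuation vector, and the linear projection are illustrative assumptions, not details from the paper (in the actual network the projection would be learned, not random):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: a CNN feature map extracted from a monocular image,
# and a low-dimensional actuation vector (e.g., cable tensions).
img_feat = rng.standard_normal((64, 8, 8))   # C x H x W image features
actuation = rng.standard_normal(6)           # 6-D actuation signal (assumed)

# "Unified feature representation" sketch: lift the actuation vector to the
# same channel dimension as the image features with a linear projection,
# then tile it spatially so the two modalities can be concatenated.
W = rng.standard_normal((64, 6)) * 0.1       # projection weights (learned in practice)
b = np.zeros(64)

lifted = W @ actuation + b                              # shape (64,)
lifted_map = np.tile(lifted[:, None, None], (1, 8, 8))  # 64 x 8 x 8

# Channel-wise concatenation gives a unified multimodal feature map that a
# downstream regression head could map to the 3D contact force.
fused = np.concatenate([img_feat, lifted_map], axis=0)  # 128 x 8 x 8
print(fused.shape)  # (128, 8, 8)
```

The spatial tiling is one common way to merge a global vector with a feature map; the paper's actual module may differ.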