{"title":"Binocular-Separated Modeling for Efficient Binocular Stereo Matching","authors":"Yeping Peng;Jianrui Xu;Guangzhong Cao;Runhao Zeng","doi":"10.1109/TITS.2025.3531115","DOIUrl":null,"url":null,"abstract":"Binocular stereo matching is a crucial task in autonomous driving for accurately estimating the depth information of objects and scenes. This task, however, is challenging due to various ill-posed regions within binocular image pairs, such as repeated textures and weak textures which present complex correspondences between the points. Existing methods extract features from binocular input images mainly by relying on deep convolutional neural networks with a substantial number of convolutional layers, which may incur high memory and computation costs, thus making it hard to deploy in real-world applications. Additionally, previous methods do not consider the correlation between view unary features during the construction of the cost volume, thus leading to inferior results. To address these issues, a novel lightweight binocular-separated feature extraction module is proposed that includes a view-shared multi-dilation fusion module and a view-specific feature extractor. Our method leverages a shallow neural network with a multi-dilation modeling module to provide similar receptive fields as deep neural networks but with fewer parameters and better computational efficiency. Furthermore, we propose incorporating the correlations of view-shared features to dynamically select view-specific features during the construction of the cost volume. Extensive experiments conducted on two public benchmark datasets show that our proposed method outperforms the deep model-based baseline method (i.e., 13.6% improvement on Scene Flow and 2.0% on KITTI 2015) while using 29.7% fewer parameters. Ablation experiments show that our method achieves superior matching performance in weak texture and edge regions. 
The source code will be made publicly available.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"26 3","pages":"3028-3038"},"PeriodicalIF":7.9000,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Intelligent Transportation Systems","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10870872/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, CIVIL","Score":null,"Total":0}
Citations: 0
Abstract
Binocular stereo matching is a crucial task in autonomous driving for accurately estimating the depth of objects and scenes. The task is challenging, however, due to various ill-posed regions within binocular image pairs, such as repeated and weak textures, which create ambiguous correspondences between points. Existing methods extract features from the binocular input images mainly with deep convolutional neural networks containing many convolutional layers, which can incur high memory and computation costs and thus makes them hard to deploy in real-world applications. Additionally, previous methods do not consider the correlation between the unary features of the two views when constructing the cost volume, leading to inferior results. To address these issues, a novel lightweight binocular-separated feature extraction module is proposed, comprising a view-shared multi-dilation fusion module and a view-specific feature extractor. Our method leverages a shallow neural network with a multi-dilation modeling module to provide receptive fields similar to those of deep neural networks, but with fewer parameters and better computational efficiency. Furthermore, we propose incorporating the correlations of view-shared features to dynamically select view-specific features during the construction of the cost volume. Extensive experiments on two public benchmark datasets show that our proposed method outperforms the deep model-based baseline (a 13.6% improvement on Scene Flow and 2.0% on KITTI 2015) while using 29.7% fewer parameters. Ablation experiments show that our method achieves superior matching performance in weak-texture and edge regions. The source code will be made publicly available.
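The efficiency claim above rests on a standard property of dilated convolutions: stacking a few layers with increasing dilation rates grows the receptive field as fast as many plain layers would. A minimal arithmetic sketch, using 3x3 kernels and hypothetical dilation rates (the abstract does not state the paper's actual rates or layer counts):

```python
# Illustrative receptive-field (RF) arithmetic, not the paper's code.
# For a stack of convolutions with kernel size k and stride 1, each layer
# with dilation d enlarges the RF by (k - 1) * d.

def receptive_field(kernel_size, dilations):
    """RF of a stride-1 conv stack with the given per-layer dilation rates."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# A shallow 4-layer stack with hypothetical dilation rates 1, 2, 4, 8:
shallow_rf = receptive_field(3, [1, 2, 4, 8])  # 1 + 2*(1+2+4+8) = 31

# A plain deep stack needs 15 undilated 3x3 layers to reach the same RF:
deep_rf = receptive_field(3, [1] * 15)         # 1 + 2*15 = 31

assert shallow_rf == deep_rf == 31
# At a fixed channel width, parameter count scales with layer count, so the
# dilated stack covers the same RF with roughly 4/15 of the parameters.
```

This is only a sketch of the general trade-off the abstract invokes; the paper's multi-dilation fusion module may combine such branches differently.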
Journal description:
The journal covers the theoretical, experimental, and operational aspects of electrical and electronics engineering and information technology as applied to Intelligent Transportation Systems (ITS). Intelligent Transportation Systems are defined as those systems utilizing synergistic technologies and systems engineering concepts to develop and improve transportation systems of all kinds. The scope of this interdisciplinary activity includes the promotion, consolidation, and coordination of ITS technical activities among IEEE entities, and providing a focus for cooperative activities, both internal and external.