Vector-Symbolic Architecture for Event-Based Optical Flow

Hongzhi You, Yijun Cao, Wei Yuan, Fanjun Wang, Ning Qiao, Yongjie Li

arXiv - CS - Symbolic Computation · Published 2024-05-14 · https://doi.org/arxiv-2405.08300
Citations: 0
Abstract
From the perspective of feature matching, optical flow estimation for event cameras involves identifying event correspondences by comparing feature similarity across accompanying event frames. In this work, we introduce an effective and robust high-dimensional (HD) feature descriptor for event frames based on Vector Symbolic Architectures (VSA). The topological similarity among neighboring variables within VSA enhances the representation similarity of feature descriptors for flow-matching points, while its structured symbolic representation capacity facilitates feature fusion across both event polarities and multiple spatial scales. Building on this HD feature descriptor, we propose a novel feature-matching framework for event-based optical flow that encompasses both a model-based method (VSA-Flow) and a self-supervised learning method (VSA-SM). In VSA-Flow, accurate optical flow estimation validates the effectiveness of the HD feature descriptor. In VSA-SM, a novel similarity-maximization method based on the HD feature descriptor learns optical flow in a self-supervised way from events alone, eliminating the need for auxiliary grayscale images. Evaluation results demonstrate that our VSA-based method achieves superior accuracy compared to both model-based and self-supervised learning methods on the DSEC benchmark, while remaining competitive with both on the MVSEC benchmark. This contribution marks a significant advancement in event-based optical flow within the feature-matching methodology.
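To make the VSA machinery concrete, the sketch below illustrates the two core operations the abstract relies on: binding (associating a role, such as event polarity, with a feature) and bundling (superposing several bound pairs into one descriptor), with cosine similarity used for matching. This is a minimal generic VSA illustration, not the paper's actual descriptor; all names (`POS`, `NEG`, the feature vectors) and the dimensionality are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8192  # hypervector dimensionality (illustrative choice)

def random_hv():
    """Random bipolar hypervector; near-orthogonal to others with high probability."""
    return rng.choice([-1.0, 1.0], size=D)

def bind(a, b):
    """Binding via elementwise product: associates a role with a feature."""
    return a * b

def bundle(vs):
    """Bundling via elementwise sum: the result stays similar to each constituent."""
    return np.sum(vs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical role vectors for the two event polarities.
POS, NEG = random_hv(), random_hv()

# Hypothetical local feature vectors.
f1, f2, f3 = random_hv(), random_hv(), random_hv()

# Descriptors built by bundling polarity-bound features.
desc_a = bundle([bind(POS, f1), bind(NEG, f2)])
desc_b = bundle([bind(POS, f1), bind(NEG, f3)])  # shares the positive-polarity part with desc_a
desc_c = bundle([bind(POS, f3), bind(NEG, f3)])  # shares no bound pair with desc_a

print(cosine(desc_a, desc_b))  # noticeably positive: one shared bound pair
print(cosine(desc_a, desc_c))  # near zero: no shared structure
```

Descriptors that share bound role-feature pairs remain measurably similar even in superposition, which is the property a feature-matching pipeline exploits when comparing candidate correspondences across event frames.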