FPGA-based bio-inspired architecture for multi-scale attentional vision
N. Cuperlier, F.J.Q. deMelo, Benoît Miramond
2016 Conference on Design and Architectures for Signal and Image Processing (DASIP), pp. 231-232, October 2016
DOI: 10.1109/DASIP.2016.7853828
Citations: 2
Abstract
Attention-based bio-inspired vision offers a different way to consider sensor processing: first, it reduces the amount of data transmitted by connected cameras, and second, it advocates a paradigm shift toward neuro-inspired post-processing of the few regions extracted from the visual field. The computational complexity of the corresponding vision models leads us to follow an in-sensor approach in the context of embedded systems. In this paper we propose an attention-based smart camera that extracts salient features based on retina receptive fields, at multiple scales and in real time, thanks to a dedicated hardware architecture. The results show that the entire visual chain can be embedded into an FPGA-SoC device delivering up to 60 frames per second. The features provided by the smart camera can then be learned by external neural networks to accomplish various applications.
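The paper does not detail its saliency model beyond "retina receptive fields at multiple scales". As an illustrative assumption only, retinal center-surround responses are commonly approximated in software by a multi-scale difference-of-Gaussians (DoG); the sketch below (function name, scales, and center/surround ratio are all hypothetical choices, not taken from the paper) shows the general idea behind such a saliency map:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_saliency(image, scales=(1.0, 2.0, 4.0), ratio=1.6):
    """Multi-scale difference-of-Gaussians (DoG) saliency sketch.

    A DoG filter approximates the center-surround response of a
    retinal receptive field; summing rectified responses over
    several scales yields a coarse saliency map.
    """
    image = image.astype(np.float64)
    saliency = np.zeros_like(image)
    for sigma in scales:
        center = gaussian_filter(image, sigma)            # center response
        surround = gaussian_filter(image, sigma * ratio)  # wider surround
        saliency += np.abs(center - surround)             # rectified contrast
    smax = saliency.max()
    # Normalize to [0, 1] so maps from different images are comparable
    return saliency / smax if smax > 0 else saliency

# Usage: locate the most salient point of a synthetic test image
img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0                                   # bright patch
sal = dog_saliency(img)
peak = np.unravel_index(np.argmax(sal), sal.shape)
```

A hardware implementation such as the one described would pipeline these filters rather than compute them frame-wide in floating point, which is precisely what makes the 60 fps in-sensor figure plausible.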