{"title":"Learning Dendritic-Neuron-Based Motion Detection for RGB Images: A Biomimetic Approach.","authors":"Tianqi Chen, Yuki Todo, Zhiyu Qiu, Yuxiao Hua, Delai Qiu, Xugang Wang, Zheng Tang","doi":"10.3390/biomimetics10010011","DOIUrl":null,"url":null,"abstract":"<p><p>In this study, we designed a biomimetic artificial visual system (AVS) inspired by biological visual system that can process RGB images. Our approach begins by mimicking the photoreceptor cone cells to simulate the initial input processing followed by a learnable dendritic neuron model to replicate ganglion cells that integrate outputs from bipolar and horizontal cell simulations. To handle multi-channel integration, we utilize a nonlearnable dendritic neuron model to simulate the lateral geniculate nucleus (LGN), which consolidates outputs across color channels, an essential function in biological multi-channel processing. Cross-validation experiments show that AVS demonstrates strong generalization across varied object-background configurations, achieving accuracy where traditional models like EfN-B0, ResNet50, and ConvNeXt typically fall short. Additionally, our results across different training-to-testing data ratios reveal that AVS maintains over 96% test accuracy even with limited training data, underscoring its robustness in low-data scenarios. This demonstrates the practical advantage of the AVS model in applications where large-scale annotated datasets are unavailable or expensive to curate. This AVS model not only advances biologically inspired multi-channel processing but also provides a practical framework for efficient, integrated visual processing in computational models.</p>","PeriodicalId":8907,"journal":{"name":"Biomimetics","volume":"10 1","pages":""},"PeriodicalIF":3.4000,"publicationDate":"2024-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11763055/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomimetics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.3390/biomimetics10010011","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Abstract
In this study, we designed a biomimetic artificial visual system (AVS), inspired by the biological visual system, that can process RGB images. Our approach begins by mimicking photoreceptor cone cells to simulate the initial input processing, followed by a learnable dendritic neuron model that replicates ganglion cells integrating outputs from bipolar and horizontal cell simulations. To handle multi-channel integration, we use a non-learnable dendritic neuron model to simulate the lateral geniculate nucleus (LGN), which consolidates outputs across color channels, an essential function in biological multi-channel processing. Cross-validation experiments show that the AVS generalizes strongly across varied object-background configurations, achieving high accuracy where traditional models such as EfN-B0, ResNet50, and ConvNeXt typically fall short. Additionally, our results across different training-to-testing data ratios reveal that the AVS maintains over 96% test accuracy even with limited training data, underscoring its robustness in low-data scenarios. This demonstrates the practical advantage of the AVS model in applications where large-scale annotated datasets are unavailable or expensive to curate. The AVS model not only advances biologically inspired multi-channel processing but also provides a practical framework for efficient, integrated visual processing in computational models.
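To make the pipeline described above concrete, the sketch below shows one way per-channel learnable dendritic neurons (the ganglion-cell stage) and a fixed LGN-like dendritic neuron could be wired together. It is not the authors' implementation: it assumes the standard four-layer dendritic neuron model (synaptic sigmoid, dendritic product, membrane sum, soma sigmoid), and the receptive-field size, branch counts, constants, and names (DendriticNeuron, AVSSketch, k) are illustrative assumptions; PyTorch is used only for convenience.

```python
# Illustrative sketch only: the wiring below is inferred from the abstract and the
# standard dendritic neuron model (DNM); it is not the paper's exact implementation.
import torch
import torch.nn as nn


class DendriticNeuron(nn.Module):
    """One dendritic neuron: synaptic layer -> dendritic branches -> membrane -> soma."""

    def __init__(self, n_inputs: int, n_branches: int, learnable: bool = True):
        super().__init__()
        if learnable:
            # Synaptic weights w and thresholds q are trained (ganglion-cell stage).
            self.w = nn.Parameter(torch.randn(n_branches, n_inputs))
            self.q = nn.Parameter(torch.randn(n_branches, n_inputs))
        else:
            # Fixed synapses (assumed here for the non-learnable LGN stage).
            self.register_buffer("w", torch.ones(n_branches, n_inputs))
            self.register_buffer("q", torch.zeros(n_branches, n_inputs))
        self.k = 5.0  # sigmoid steepness; an assumed constant

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_inputs)
        y = torch.sigmoid(self.k * (self.w * x.unsqueeze(1) - self.q))  # synaptic layer
        z = y.prod(dim=-1)        # dendritic layer: multiply synapses along each branch
        v = z.sum(dim=-1)         # membrane layer: sum over branches
        return torch.sigmoid(self.k * (v - 0.5))  # soma output in (0, 1)


class AVSSketch(nn.Module):
    """Three learnable DNMs (one per RGB channel) feeding one fixed LGN-like DNM."""

    def __init__(self, receptive_field: int = 9, n_branches: int = 4):
        super().__init__()
        self.ganglion = nn.ModuleList(
            [DendriticNeuron(receptive_field, n_branches) for _ in range(3)]
        )
        self.lgn = DendriticNeuron(3, n_branches=1, learnable=False)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, 3, receptive_field), flattened local RGB receptive fields
        per_channel = torch.stack(
            [g(patches[:, c]) for c, g in enumerate(self.ganglion)], dim=-1
        )  # (batch, 3)
        return self.lgn(per_channel)  # consolidated response, shape (batch,)


model = AVSSketch()
out = model(torch.rand(8, 3, 9))  # -> tensor of shape (8,)
```

The split between a trainable ganglion stage and a fixed LGN stage mirrors the division of labor stated in the abstract; everything upstream of the ganglion stage (cone, horizontal, and bipolar simulations) is omitted here for brevity.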