Ria Patel, Sujit Tripathy, Zachary Sublett, Seoyoung An, Riya Patel
Using CSNNs to Perform Event-based Data Processing & Classification on ASL-DVS
arXiv:2408.00611 · arXiv - CS - Neural and Evolutionary Computing · Published 2024-08-01
Citation count: 0
Abstract
Recent advancements in bio-inspired visual sensing and neuromorphic computing
have led to highly efficient bio-inspired solutions with real-world
applications. One notable application integrates event-based cameras with
spiking neural networks (SNNs) to process event-based sequences, which are
difficult to handle because they are asynchronous and sparse. In this project,
we develop a convolutional spiking neural network (CSNN) architecture that
leverages convolutional operations and the recurrent properties of spiking
neurons to learn the spatial and temporal relations in the ASL-DVS gesture
dataset. ASL-DVS is a neuromorphic dataset containing hand gestures displaying
24 letters (A to Y, excluding J and Z, whose signs involve motion) from the
American Sign Language (ASL) alphabet. We performed classification on a
pre-processed subset of the full ASL-DVS dataset to identify letter signs and
achieved 100% training accuracy. Specifically, this was achieved by training
on the Google Cloud compute platform with a learning rate of 0.0005, a batch
size of 25 (20 batches in total), 200 iterations, and 10 epochs.
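The "recurrent properties of spiking neurons" that the abstract refers to can be illustrated with a minimal leaky integrate-and-fire (LIF) update in plain NumPy. This is an illustrative sketch under assumed parameters (`beta=0.9`, `threshold=1.0`, constant input current), not the authors' implementation, which applies such dynamics after convolutional layers:

```python
import numpy as np

def lif_step(input_current, mem, beta=0.9, threshold=1.0):
    """One leaky integrate-and-fire update: the membrane potential decays
    by factor beta (the recurrence over time), integrates new input,
    and emits a binary spike with a soft reset when it crosses threshold."""
    mem = beta * mem + input_current            # leaky integration
    spikes = (mem >= threshold).astype(float)   # binary spike output
    mem = mem - spikes * threshold              # soft reset after spiking
    return spikes, mem

# Drive 4 neurons with a constant current of 0.3 for 10 time steps.
mem = np.zeros(4)
spike_counts = np.zeros(4)
for t in range(10):
    spk, mem = lif_step(np.full(4, 0.3), mem)
    spike_counts += spk
# Each neuron fires twice over the 10 steps with these settings.
```

In a CSNN, `input_current` would be the output of a convolution over an event frame, so the network captures spatial structure through the convolution and temporal structure through the membrane state carried across time steps.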