{"title":"An Indian Currency Recognition Model for Assisting Visually Impaired Individuals","authors":"Madhav Pasumarthy, Rutvi Padhy, Raghuveer Yadav, Ganesh Subramaniam, Madhav Rao","doi":"10.1109/RASSE54974.2022.9989624","DOIUrl":null,"url":null,"abstract":"Visually impaired persons find it extremely difficult to perform cash transactions in outdoor environments. For assisting the visually challenged individuals, a YOLOv5 based deep neural network was designed to detect image based currency denominations. Thereby aid in completing the authentic transaction. The robust model was trained for images with currency notes in different backgrounds, multiple sides of the currency notes presented, notes around cluttered objects, notes near reflective surfaces, and blurred images of the currency notes. An annotated and augmented dataset of around 10,000 original images was created for developing the model. A pre-processing step to rescale all the images to 224 × 224 was applied to standardize the input to the neural network, and generalize the model for different platforms including single board computer and smartphones. The trained model showcased an average denomination recognition accuracy of 92.71% for an altogether different dataset. The trained model was deployed on Raspberry-Pi and Smartphone independently, and the outcome to detect the currency denomination from the image was successfully demonstrated. 
The model showcased adequate performance on different platforms, leading to the exploration of several other assistive applications based on the currency recognition model to improve the standard of living for visually challenged individuals.","PeriodicalId":382440,"journal":{"name":"2022 IEEE International Conference on Recent Advances in Systems Science and Engineering (RASSE)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Recent Advances in Systems Science and Engineering (RASSE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RASSE54974.2022.9989624","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Visually impaired persons find it extremely difficult to perform cash transactions in outdoor environments. To assist visually challenged individuals, a YOLOv5-based deep neural network was designed to detect currency denominations from images and thereby aid in completing authentic transactions. The model was trained for robustness on images of currency notes against different backgrounds, with multiple sides of the notes presented, notes among cluttered objects, notes near reflective surfaces, and blurred images of the notes. An annotated and augmented dataset of around 10,000 original images was created to develop the model. A pre-processing step rescaled all images to 224 × 224 to standardize the input to the neural network and to generalize the model across platforms, including single-board computers and smartphones. The trained model achieved an average denomination recognition accuracy of 92.71% on an entirely separate dataset. The model was deployed independently on a Raspberry Pi and a smartphone, and detection of the currency denomination from an image was successfully demonstrated. The model showed adequate performance across the different platforms, motivating the exploration of several other assistive applications built on the currency recognition model to improve the standard of living of visually challenged individuals.
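The abstract's pre-processing step (rescaling every input image to 224 × 224 before it reaches the detector) can be sketched as below. The paper does not publish its code, so this is a minimal illustration, assuming a NumPy H × W × C image array and a simple nearest-neighbour rescale; the function name `rescale` is hypothetical, and a real pipeline would likely use a library resizer (e.g. OpenCV or Pillow) with interpolation.

```python
import numpy as np

TARGET = 224  # input side length used in the paper's pre-processing step


def rescale(image: np.ndarray, size: int = TARGET) -> np.ndarray:
    """Nearest-neighbour rescale of an H x W x C image to size x size.

    Hypothetical helper illustrating the standardization step; the
    original work's exact resizing method is not specified.
    """
    h, w = image.shape[:2]
    # Map each output row/column back to a source row/column index.
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    # Advanced integer indexing picks the sampled pixels in one shot.
    return image[rows[:, None], cols]


# Example: a dummy 480 x 640 RGB camera frame, as a Raspberry Pi
# camera or smartphone might capture.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
out = rescale(frame)
print(out.shape)  # (224, 224, 3)
```

Standardizing the input size this way is what lets a single trained network serve both the single-board-computer and smartphone deployments described in the abstract.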