Portable Camera-Based Assistive Text and Product Label Reading from Hand-Held Objects for Blind Persons
A. Amutha, G. Sathish, N. Sakthivel, N. Shreyas, A. Nagendran, C. Srinivasan
International Journal of Advance Research and Innovative Ideas in Education, vol. 72, no. 1, pp. 336-339, 2019. DOI: 10.21090/ijaerd.030657
Citations: 13
Abstract
We present a camera-based assistive text reading framework that helps blind persons read text labels and product packaging on hand-held objects in their daily lives. To separate the object from cluttered backgrounds or other neighbouring objects in the camera view, we first propose an efficient and effective motion-based method that defines a region of interest (ROI) in the video by asking the user to shake the object. This scheme extracts the moving object region with a mixture-of-Gaussians-based background subtraction technique. Within the extracted ROI, text localization and recognition are then conducted to obtain the text information. To automatically localize the text regions within the object ROI, we propose a novel text localization algorithm that learns gradient features of stroke orientations and distributions of edge pixels in an Adaboost model.
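The shake-to-segment step described above can be illustrated with a short sketch: a mixture-of-Gaussians background model separates the moving hand-held object from the static background, and the largest moving blob's bounding box is taken as the ROI for later text localization. This is not the authors' code; it is a minimal illustration using OpenCV's MOG2 subtractor, and the function name and parameter values are assumptions chosen for clarity.

```python
import cv2

def extract_object_roi(video_path, history=120, var_threshold=25):
    """Return the bounding box (x, y, w, h) of the dominant moving region,
    or None if no foreground motion is detected. Illustrative sketch only."""
    cap = cv2.VideoCapture(video_path)
    # Mixture-of-Gaussians background model (OpenCV's MOG2 variant).
    subtractor = cv2.createBackgroundSubtractorMOG2(
        history=history, varThreshold=var_threshold, detectShadows=False)
    accumulated = None

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)          # per-frame foreground mask
        mask = cv2.medianBlur(mask, 5)          # suppress salt-and-pepper noise
        accumulated = mask if accumulated is None else cv2.bitwise_or(accumulated, mask)

    cap.release()
    if accumulated is None:
        return None

    # Keep the largest connected foreground component as the object region.
    contours, _ = cv2.findContours(accumulated, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)            # ROI handed to text localization
```

In the paper's pipeline, the returned ROI would then be passed to the Adaboost-based text localizer and an OCR engine; those stages are not shown here.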