Compute-efficient eye state detection: algorithm, dataset and evaluations
Supriya Sathyanarayana, R. Satzoda, T. Srikanthan, S. Sathyanarayana
Proceedings of the 9th International Conference on Distributed Smart Cameras, published 2015-09-08
DOI: 10.1145/2789116.2789144
Citations: 2
Abstract
Eye state can be used as an important cue to monitor the wellness of a patient. In this paper, we propose a computationally efficient eye state detection technique in the context of patient monitoring. The proposed method uses weighted accumulations of intensity and gradients, along with color thresholding on a reduced set of pixels, to extract the various features of the eye, which in turn are used to infer the eye state. Additionally, we present a dataset of 2500 images that was created for evaluating the proposed technique. The method was shown to effectively differentiate open, closed and half-closed eye states with an accuracy of 91.3% when evaluated on this dataset. The computational cost of the proposed technique is evaluated and is shown to achieve about 67% savings with respect to the state of the art.
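To make the core idea concrete, the following is a minimal sketch of eye state classification from weighted accumulations of intensity and gradients over an eye patch. It is not the paper's algorithm: the row-weighting scheme, the thresholds, and the omission of the color-thresholding step are all assumptions made for illustration.

```python
import numpy as np

def eye_features(eye_patch, weights=None):
    """Illustrative features: weighted row-wise accumulations of intensity
    and vertical gradient magnitude over a grayscale eye patch.
    The triangular row weighting is an assumption, not the paper's scheme."""
    eye_patch = eye_patch.astype(np.float64)
    h, _ = eye_patch.shape
    if weights is None:
        # Emphasize central rows, where the iris and eyelid boundary
        # are assumed to lie; edge rows get near-zero weight.
        weights = 1.0 - np.abs(np.linspace(-1.0, 1.0, h))
    intensity_acc = float((eye_patch.mean(axis=1) * weights).sum())
    # Vertical gradients: strong for an open eye (skin-to-iris transitions),
    # weak for a closed eye (near-uniform eyelid).
    grad = np.abs(np.diff(eye_patch, axis=0))
    gradient_acc = float((grad.mean(axis=1) * weights[:-1]).sum())
    return intensity_acc, gradient_acc

def classify_eye_state(eye_patch, t_open=150.0, t_half=40.0):
    """Three-way decision on the gradient accumulation alone.
    The thresholds t_open and t_half are illustrative placeholders."""
    _, gradient_acc = eye_features(eye_patch)
    if gradient_acc > t_open:
        return "open"
    if gradient_acc > t_half:
        return "half-closed"
    return "closed"
```

Because the accumulations reduce each patch to two scalars, the per-frame cost is dominated by a single pass over the pixels, which is the kind of saving the paper's compute-efficiency claim refers to.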