{"title":"通过眼睛注视模式预测意图","authors":"Fatemeh Koochaki, L. Najafizadeh","doi":"10.1109/BIOCAS.2018.8584665","DOIUrl":null,"url":null,"abstract":"Eye movement is a valuable (and in several cases, the only remaining) means of communication for impaired people with extremely limited motor or communication capabilities. In this paper, we present a new framework that utilizes eye gaze patterns as input, to predict user's intention for performing daily tasks. The proposed framework consists of two main modules. First, by clustering the eye gaze patterns, the regions of interest (ROIs) on the displayed image are extracted. A deep convolutional neural network is then trained and used to recognize the objects in each ROI. Finally, the intended task is predicted by using support vector machine (SVM) through learning the embedded relationship between recognized objects. The proposed framework is tested using data from 8 subjects, in an experiment considering 4 intended tasks as well as the scenario in which the user does not have a specific intention when looking at the displayed image. Results demonstrate an average accuracy of 95.68% across all tasks, confirming the efficacy of the proposed framework.","PeriodicalId":259162,"journal":{"name":"2018 IEEE Biomedical Circuits and Systems Conference (BioCAS)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"19","resultStr":"{\"title\":\"Predicting Intention Through Eye Gaze Patterns\",\"authors\":\"Fatemeh Koochaki, L. Najafizadeh\",\"doi\":\"10.1109/BIOCAS.2018.8584665\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Eye movement is a valuable (and in several cases, the only remaining) means of communication for impaired people with extremely limited motor or communication capabilities. In this paper, we present a new framework that utilizes eye gaze patterns as input, to predict user's intention for performing daily tasks. The proposed framework consists of two main modules. First, by clustering the eye gaze patterns, the regions of interest (ROIs) on the displayed image are extracted. A deep convolutional neural network is then trained and used to recognize the objects in each ROI. Finally, the intended task is predicted by using support vector machine (SVM) through learning the embedded relationship between recognized objects. The proposed framework is tested using data from 8 subjects, in an experiment considering 4 intended tasks as well as the scenario in which the user does not have a specific intention when looking at the displayed image. 
Results demonstrate an average accuracy of 95.68% across all tasks, confirming the efficacy of the proposed framework.\",\"PeriodicalId\":259162,\"journal\":{\"name\":\"2018 IEEE Biomedical Circuits and Systems Conference (BioCAS)\",\"volume\":\"63 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"19\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE Biomedical Circuits and Systems Conference (BioCAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/BIOCAS.2018.8584665\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE Biomedical Circuits and Systems Conference (BioCAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/BIOCAS.2018.8584665","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Eye movement is a valuable (and in some cases, the only remaining) means of communication for people with severely limited motor or communication capabilities. In this paper, we present a new framework that uses eye gaze patterns as input to predict a user's intention to perform daily tasks. The proposed framework consists of two main modules. In the first module, the regions of interest (ROIs) on the displayed image are extracted by clustering the eye gaze patterns, and a deep convolutional neural network is trained to recognize the objects within each ROI. In the second module, the intended task is predicted by a support vector machine (SVM) that learns the embedded relationships among the recognized objects. The proposed framework is evaluated with data from 8 subjects in an experiment covering 4 intended tasks, as well as a scenario in which the user has no specific intention when looking at the displayed image. Results demonstrate an average accuracy of 95.68% across all tasks, confirming the efficacy of the proposed framework.
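The abstract outlines a pipeline of gaze clustering, per-ROI object recognition, and SVM-based intention prediction. The sketch below shows one way such a pipeline could be wired together; the specific choices (DBSCAN for clustering gaze points, an ImageNet-pretrained ResNet-18 as the ROI recognizer, and a bag-of-objects histogram as the SVM input) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the gaze -> ROI -> object -> intention pipeline.
# DBSCAN, ResNet-18, and the bag-of-objects feature are assumptions for illustration.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.svm import SVC
import torch
import torchvision.transforms as T
from torchvision.models import resnet18

def extract_rois(gaze_points, eps=40, min_samples=5):
    """Cluster (N, 2) gaze points in pixel coordinates; return one bounding box per cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(gaze_points)
    rois = []
    for k in set(labels) - {-1}:                 # label -1 marks noise points
        pts = gaze_points[labels == k]
        x0, y0 = pts.min(axis=0)
        x1, y1 = pts.max(axis=0)
        rois.append((int(x0), int(y0), int(x1), int(y1)))
    return rois

# Assumed object recognizer: an ImageNet-pretrained CNN applied to each ROI crop.
cnn = resnet18(weights="IMAGENET1K_V1").eval()
preprocess = T.Compose([
    T.ToPILImage(), T.Resize((224, 224)), T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def recognize_objects(image, rois):
    """Return a predicted class index for each ROI crop (image: HxWx3 uint8 array)."""
    preds = []
    for x0, y0, x1, y1 in rois:
        crop = preprocess(image[y0:y1 + 1, x0:x1 + 1])
        with torch.no_grad():
            preds.append(int(cnn(crop.unsqueeze(0)).argmax(dim=1)))
    return preds

def objects_to_feature(object_ids, num_classes=1000):
    """Bag-of-objects histogram used as the SVM input (an assumed encoding)."""
    feat = np.zeros(num_classes)
    for c in object_ids:
        feat[c] += 1
    return feat

# Intention classifier: an SVM trained on (feature, task-label) pairs.
svm = SVC(kernel="linear")
# svm.fit(X_train, y_train)  # X_train built with objects_to_feature, y_train = task labels
# task = svm.predict([objects_to_feature(recognize_objects(image, extract_rois(gaze)))])
```

In this sketch the "no specific intention" scenario would simply be an additional class label for the SVM, so the same classifier covers the 4 intended tasks plus the free-viewing case described in the experiment.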