{"title":"在人机交互中,眼睛看到的是什么?","authors":"Roel Vertegaal","doi":"10.1145/507072.507084","DOIUrl":null,"url":null,"abstract":"In recent years, there has been a resurgence of interest in the use of eye tracking systems for interactive purposes. However, it is easy to be fooled by the interactive power of eye tracking. When first encountering eye based interaction, most people are genuinely impressed with the almost magical window into the mind of the user that it seems to provide. There are two reasons why this belief may lead to subsequent disappointment. Firstly, although current eye tracking equipment is far superior to that used in the seventies and early eighties, it is by no means perfect. For example, there is still the tradeoff between the use of an obtrusive head-based system or a desk-based system with limited head movement. Such technical problems continue to limit the usefulness of eye tracking as a generic form of input. Secondly, there are real methodological problems regarding the interpretation of eye input for use in graphical user interfaces. One example, the \"Midas Touch\" problem, is observed in systems that use eye movements to directly control a mouse cursor. When does the system decide that a user is interested in a visual object? Systems that implement dwell time for this purpose run the risk of disallowing visual scanning behavior, requiring users to control their eye movements for the purposes of output, rather than input. However, difficulties in the interpretation of visual interest remain even when systems use another input modality for signaling intent. Another classic methodological problem is exemplified by the application of eye movement recording in usability studies. Although eye fixations provide some of the best measures of visual interest, they do not provide a measure of cognitive interest. It is one thing to determine whether a user has observed certain visual information, but quite another to determine whether this information has in fact been processed or understood. Some of our technological problems can and will be solved. However, we believe that our methodological issues point to a more fundamental problem: What is the nature of the input information conveyed by eye movements and to what interactive functions can this information provide added value?","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2002-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"What do the eyes behold for human-computer interaction?\",\"authors\":\"Roel Vertegaal\",\"doi\":\"10.1145/507072.507084\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, there has been a resurgence of interest in the use of eye tracking systems for interactive purposes. However, it is easy to be fooled by the interactive power of eye tracking. When first encountering eye based interaction, most people are genuinely impressed with the almost magical window into the mind of the user that it seems to provide. There are two reasons why this belief may lead to subsequent disappointment. Firstly, although current eye tracking equipment is far superior to that used in the seventies and early eighties, it is by no means perfect. For example, there is still the tradeoff between the use of an obtrusive head-based system or a desk-based system with limited head movement. 
Such technical problems continue to limit the usefulness of eye tracking as a generic form of input. Secondly, there are real methodological problems regarding the interpretation of eye input for use in graphical user interfaces. One example, the \\\"Midas Touch\\\" problem, is observed in systems that use eye movements to directly control a mouse cursor. When does the system decide that a user is interested in a visual object? Systems that implement dwell time for this purpose run the risk of disallowing visual scanning behavior, requiring users to control their eye movements for the purposes of output, rather than input. However, difficulties in the interpretation of visual interest remain even when systems use another input modality for signaling intent. Another classic methodological problem is exemplified by the application of eye movement recording in usability studies. Although eye fixations provide some of the best measures of visual interest, they do not provide a measure of cognitive interest. It is one thing to determine whether a user has observed certain visual information, but quite another to determine whether this information has in fact been processed or understood. Some of our technological problems can and will be solved. However, we believe that our methodological issues point to a more fundamental problem: What is the nature of the input information conveyed by eye movements and to what interactive functions can this information provide added value?\",\"PeriodicalId\":127538,\"journal\":{\"name\":\"Eye Tracking Research & Application\",\"volume\":\"16 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2002-03-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Eye Tracking Research & Application\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/507072.507084\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Eye Tracking Research & Application","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/507072.507084","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
What do the eyes behold for human-computer interaction?
In recent years, there has been a resurgence of interest in the use of eye tracking systems for interactive purposes. However, it is easy to be fooled by the interactive power of eye tracking. When first encountering eye-based interaction, most people are genuinely impressed with the almost magical window into the mind of the user that it seems to provide. There are two reasons why this belief may lead to subsequent disappointment. Firstly, although current eye tracking equipment is far superior to that used in the 1970s and early 1980s, it is by no means perfect. For example, there is still the tradeoff between the use of an obtrusive head-based system and a desk-based system with limited head movement. Such technical problems continue to limit the usefulness of eye tracking as a generic form of input. Secondly, there are real methodological problems regarding the interpretation of eye input for use in graphical user interfaces. One example, the "Midas Touch" problem, is observed in systems that use eye movements to directly control a mouse cursor. When does the system decide that a user is interested in a visual object? Systems that implement dwell time for this purpose run the risk of disallowing visual scanning behavior, requiring users to control their eye movements for the purposes of output rather than input. However, difficulties in the interpretation of visual interest remain even when systems use another input modality for signaling intent. Another classic methodological problem is exemplified by the application of eye movement recording in usability studies. Although eye fixations provide some of the best measures of visual interest, they do not provide a measure of cognitive interest. It is one thing to determine whether a user has observed certain visual information, but quite another to determine whether this information has in fact been processed or understood. Some of our technological problems can and will be solved. However, we believe that our methodological issues point to a more fundamental problem: what is the nature of the input information conveyed by eye movements, and for which interactive functions can this information provide added value?
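To make the "Midas Touch" tradeoff concrete, here is a minimal sketch of dwell-time gaze selection, the scheme the abstract criticizes: gaze samples are mapped to on-screen targets, and a target is activated once gaze has rested on it continuously for a fixed threshold. The class and function names, the 0.5-second threshold, and the data layout are illustrative assumptions rather than anything specified in the paper; the point is only that any fixation longer than the threshold triggers a selection, whether the user meant to issue a command or was simply scanning the display.

```python
# Minimal sketch of dwell-time gaze selection (illustrative only; names,
# the 0.5 s threshold, and the data layout are assumptions, not from the paper).
from dataclasses import dataclass
from typing import Optional


@dataclass
class Target:
    """A rectangular on-screen object the user can select by looking at it."""
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1


class DwellSelector:
    """Fires a selection once gaze stays on one target for `dwell_s` seconds.

    The Midas Touch problem is visible here: any fixation longer than the
    threshold selects the target, whether the user intended a command or
    was merely scanning the display.
    """

    def __init__(self, targets: list, dwell_s: float = 0.5):
        self.targets = targets
        self.dwell_s = dwell_s
        self._current: Optional[Target] = None  # target the gaze is on now
        self._since = 0.0                       # when the gaze entered it
        self._fired = False                     # already selected this dwell?

    def feed(self, t: float, x: float, y: float) -> Optional[Target]:
        """Consume one gaze sample (timestamp in seconds, screen coordinates);
        return the selected target when the dwell threshold is crossed."""
        hit = next((tg for tg in self.targets if tg.contains(x, y)), None)
        if hit is not self._current:            # gaze entered a new region
            self._current, self._since, self._fired = hit, t, False
            return None
        if hit is not None and not self._fired and t - self._since >= self.dwell_s:
            self._fired = True                  # fire once per continuous dwell
            return hit
        return None


if __name__ == "__main__":
    delete_button = Target("Delete", 100, 100, 200, 140)
    selector = DwellSelector([delete_button], dwell_s=0.5)
    # Simulated 60 Hz gaze samples resting on the Delete button: the button is
    # "clicked" at ~0.5 s even if the user was only reading its label.
    for i in range(60):
        fired = selector.feed(i / 60.0, x=150, y=120)
        if fired is not None:
            print(f"selected {fired.name} at t={i / 60.0:.2f}s")
```

The usual mitigations either lengthen the dwell threshold, which slows interaction and still forbids lingering looks, or hand the act of selection to another modality such as a key press; as the abstract notes, even then the underlying difficulty of reading intent and interest from gaze remains.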