
Latest publications in Eye Tracking Research & Application

Robust clustering of eye movement recordings for quantification of visual interest
Pub Date: 2004-03-22 DOI: 10.1145/968363.968368
A. Santella, D. DeCarlo
Characterizing the location and extent of a viewer's interest, in terms of eye movement recordings, informs a range of investigations in image and scene viewing. We present an automatic data-driven method for accomplishing this, which clusters visual point-of-regard (POR) measurements into gazes and regions-of-interest using the mean shift procedure. Clusters produced using this method form a structured representation of viewer interest, and at the same time are replicable and not heavily influenced by noise or outliers. Thus, they are useful in answering fine-grained questions about where and how a viewer examined an image.
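The appeal of mean shift here is that it needs only a spatial kernel bandwidth rather than a preset number of clusters, and samples far from any mode can simply be left unlabeled. A minimal sketch of the idea, using scikit-learn's MeanShift on synthetic point-of-regard data (the bandwidth, coordinates, and cluster_all choice are illustrative assumptions, not the paper's parameters):

```python
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(0)
# Synthetic POR samples in pixels: two regions of interest plus scattered outliers.
por = np.vstack([
    rng.normal(loc=(200, 150), scale=15, size=(80, 2)),  # gaze cluster 1
    rng.normal(loc=(500, 400), scale=15, size=(60, 2)),  # gaze cluster 2
    rng.uniform(low=0, high=640, size=(10, 2)),          # noise / outliers
])

# The bandwidth plays the role of the spatial kernel width; cluster_all=False
# leaves points far from any mode unlabeled (-1) instead of forcing membership.
ms = MeanShift(bandwidth=40.0, cluster_all=False)
labels = ms.fit_predict(por)

for k in np.unique(labels[labels >= 0]):
    pts = por[labels == k]
    print(f"ROI {k}: center={pts.mean(axis=0).round(1)}, n={len(pts)}")
```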
Citations: 191
A free-head, simple calibration, gaze tracking system that enables gaze-based interaction
Pub Date: 2004-03-22 DOI: 10.1145/968363.968387
Takehiko Ohno, N. Mukawa
Human eye gaze is a strong candidate to create a new application area based on human-computer interaction. To implement a really practical gaze-based interaction system, gaze detection must be realized without placing any restriction on the user's behavior or comfort. This paper describes a gaze tracking system that offers free-head, simple personal calibration. It does not require the user to wear anything on her head, and she can move her head freely. Personal calibration takes only a very short time; the user is asked to look at two markers on the screen. An experiment shows that the accuracy of the implemented system is about 1.0 degree (visual angle).
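The abstract does not spell out the calibration model, but the "look at two markers" step suggests how little data a per-axis linear fit needs. A hypothetical sketch of such a two-point calibration (the raw-signal format, marker positions, and linear model are assumptions for illustration, not the paper's method):

```python
def fit_two_point(raw_a, raw_b, screen_a, screen_b):
    """Per-axis gain/offset so that screen = gain * raw + offset."""
    gains, offsets = [], []
    for axis in (0, 1):
        gain = (screen_b[axis] - screen_a[axis]) / (raw_b[axis] - raw_a[axis])
        gains.append(gain)
        offsets.append(screen_a[axis] - gain * raw_a[axis])
    return gains, offsets

def apply_calibration(raw, gains, offsets):
    return tuple(g * r + o for g, r, o in zip(gains, raw, offsets))

# Raw gaze-signal readings while the user fixates two known markers:
gains, offsets = fit_two_point((0.12, 0.08), (0.55, 0.47),   # raw at markers A, B
                               (100, 100), (1180, 620))      # marker pixels A, B
print(apply_calibration((0.33, 0.27), gains, offsets))       # estimated gaze point
```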
Citations: 184
Gaze typing compared with input by head and hand
Pub Date: 2004-03-22 DOI: 10.1145/968363.968389
J. P. Hansen, K. Tørning, A. Johansen, K. Itoh, Hirotaka Aoki
This paper investigates the usability of gaze-typing systems for disabled people in a broad perspective that takes into account the usage scenarios and the particular users that these systems benefit. Design goals for a gaze-typing system are identified: productivity above 25 words per minute, robust tracking, high availability, and support of multimodal input. A detailed investigation of the efficiency of and user satisfaction with a Danish and a Japanese gaze-typing system compares them to head- and mouse (hand) typing. We found gaze typing to be more error-prone than the other two modalities. Gaze typing was just as fast as head typing, and both were slower than mouse (hand) typing. Possibilities for design improvements are discussed.
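For context, the throughput and error figures in studies like this one are conventionally computed from a transcription task; a small sketch using the standard text-entry formulas (one "word" = five characters, error rate from minimum string distance — these conventions come from the text-entry literature, not from details given in the paper):

```python
def wpm(transcribed: str, seconds: float) -> float:
    """Words per minute, with the conventional 5-characters-per-word definition."""
    return (len(transcribed) / 5.0) / (seconds / 60.0)

def msd(a: str, b: str) -> int:
    """Minimum string (Levenshtein) distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def error_rate(presented: str, transcribed: str) -> float:
    return msd(presented, transcribed) / max(len(presented), len(transcribed))

print(wpm("the quick brown fox", 46.0))      # ~4.96 WPM
print(error_rate("the quick", "thw quick"))  # ~0.11
```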
Citations: 143
Building a lightweight eyetracking headgear
Pub Date: 2004-03-22 DOI: 10.1145/968363.968386
J. Babcock, J. Pelz
Eyetracking systems that use video-based cameras to monitor the eye and scene can be made significantly smaller thanks to tiny micro-lens video cameras. Pupil detection algorithms are generally implemented in hardware, allowing for real-time eyetracking. However, it is likely that real-time eyetracking will soon be fully accomplished in software alone. This paper encourages an "open-source" approach to eyetracking by providing practical tips on building a lightweight eyetracker from commercially available micro-lens cameras and other parts. While the headgear described here can be used with any dark-pupil eyetracking controller, it also opens the door to open-source software solutions that could be developed by the eyetracking and image-processing communities. Such systems could be optimized without concern for real-time performance because the systems could be run offline.
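As a taste of what a software-only, offline approach can look like, here is a minimal dark-pupil detector built on OpenCV (the threshold, morphology kernel, and input file are illustrative assumptions, not the authors' pipeline):

```python
import cv2
import numpy as np

def detect_pupil(eye_gray: np.ndarray, thresh: int = 40):
    """Find the pupil as the largest dark blob in a grayscale IR eye image."""
    # Dark pupil: keep only the darkest pixels, then remove small specks.
    _, mask = cv2.threshold(eye_gray, thresh, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (x, y), r = cv2.minEnclosingCircle(largest)
    return (x, y), r  # pupil center and radius in image coordinates

frame = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)  # hypothetical recorded frame
if frame is not None:
    print(detect_pupil(frame))
```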
Citations: 166
Eye gaze patterns differentiate novice and experts in a virtual laparoscopic surgery training environment
Pub Date: 2004-03-22 DOI: 10.1145/968363.968370
Benjamin Law, S. Fraser, Stella Atkins, A. Kirkpatrick, A. Lomax, C. MacKenzie
Visual information is important in surgeons' manipulative performance especially in laparoscopic surgery where tactual feedback is less than in open surgery. The study of surgeons' eye movements is an innovative way of assessing skill, in that a comparison of the eye movement strategies between expert surgeons and novices may show important differences that could be used in training. We conducted a preliminary study comparing the eye movements of 5 experts and 5 novices performing a one-handed aiming task on a computer-based laparoscopic surgery simulator. The performance results showed that experts were quicker and generally committed fewer errors than novices. We investigated eye movements as a possible factor for experts performing better than novices. The results from eye gaze analysis showed that novices needed more visual feedback of the tool position to complete the task than did experts. In addition, the experts tended to maintain eye gaze on the target while manipulating the tool, whereas novices were more varied in their behaviours. For example, we found that on some trials, novices tracked the movement of the tool until it reached the target.
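One simple way to quantify the "eyes on target versus eyes on tool" difference the study reports is the fraction of gaze samples falling inside a region of interest; a toy sketch (the region definition and sample format are assumptions, not the study's analysis code):

```python
import numpy as np

def fraction_on_region(gaze_xy: np.ndarray, center, radius: float) -> float:
    """Fraction of gaze samples within `radius` pixels of a region center."""
    d = np.linalg.norm(gaze_xy - np.asarray(center), axis=1)
    return float(np.mean(d <= radius))

gaze = np.array([[312, 240], [305, 233], [150, 180], [308, 238]])  # toy samples
print(fraction_on_region(gaze, center=(310, 236), radius=20))      # 0.75
```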
Citations: 251
Age differences in visual search for information on web pages
Pub Date: 2004-03-22 DOI: 10.1145/968363.968379
S. Josephson, Michael E. Holmes
Citations: 11
Visual deictic reference in a collaborative virtual environment
Pub Date: 2004-03-22 DOI: 10.1145/968363.968369
A. Duchowski, Nathan Cournia, Brian Cumming, Daniel McCallum, A. Gramopadhye, J. Greenstein, Sajay Sadasivan, R. Tyrrell
This paper evaluates the use of Visual Deictic Reference (VDR) in Collaborative Virtual Environments (CVEs). A simple CVE capable of hosting two (or more) participants simultaneously immersed in the same virtual environment is used as the testbed. One participant's VDR, obtained by tracking the participant's gaze, is projected to co-participants' environments in real-time as a colored lightspot. We compare the VDR lightspot when it is eye-slaved to when it is head-slaved and show that an eye-slaved VDR helps disambiguate the deictic point of reference, especially during conditions when the user's line of sight is decoupled from their head direction.
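Placing a gaze-slaved lightspot in a shared scene reduces, at its simplest, to intersecting the tracked gaze ray with scene geometry and broadcasting the hit point; a sketch for a single plane (the scene setup and all coordinates are illustrative, as the abstract gives no implementation detail):

```python
import numpy as np

def gaze_ray_plane_hit(eye_pos, gaze_dir, plane_point, plane_normal):
    """World-space point where the gaze ray meets the plane, or None."""
    d = np.dot(plane_normal, gaze_dir)
    if abs(d) < 1e-9:                                  # ray parallel to plane
        return None
    t = np.dot(plane_normal, plane_point - eye_pos) / d
    return eye_pos + t * gaze_dir if t > 0 else None   # only hits in front of the eye

hit = gaze_ray_plane_hit(np.array([0.0, 1.6, 0.0]),    # viewer eye position
                         np.array([0.0, -0.1, -1.0]),  # tracked gaze direction
                         np.array([0.0, 0.0, -5.0]),   # a wall 5 m in front
                         np.array([0.0, 0.0, 1.0]))    # wall normal facing viewer
print(hit)  # render the colored lightspot here in co-participants' views
```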
Citations: 26
Head movement estimation for wearable eye tracker
Pub Date: 2004-03-22 DOI: 10.1145/968363.968388
C. Rothkopf, J. Pelz
In the study of eye movements in natural tasks, where subjects are able to move freely in their environment, it is desirable to capture a video of the subject's surroundings that is not limited to the small field of view obtained by the scene camera of an eye tracker. Moreover, recovering the head movements could give additional information about the type of eye movement that was carried out, the overall gaze change in world coordinates, and insight into high-order perceptual strategies. Algorithms for the classification of eye movements in such natural tasks could also benefit from the additional head movement data. We propose to use an omnidirectional vision sensor consisting of a small CCD video camera and a hyperbolic mirror. The camera is mounted on an ASL eye tracker and records an image sequence at 60 Hz. Several algorithms for the extraction of rotational motion from this image sequence were implemented, and their performance was compared against the measurements of a Fasttrack magnetic tracking system. Using data from the eye tracker together with the data obtained by the omnidirectional image sensor, a new algorithm for classifying different types of eye movements based on a hidden Markov model was developed.
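A minimal sketch of the closing idea: a hidden Markov model over combined eye and head velocity features that labels each sample with a movement type. hmmlearn's GaussianHMM stands in for the authors' model, and the three-state setup and toy features are assumptions for illustration:

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(1)
# Toy feature sequence: columns = (eye-in-head velocity, head velocity) in deg/s.
X = np.vstack([
    rng.normal((2, 1), 1.0, size=(100, 2)),    # fixation-like samples
    rng.normal((150, 5), 20.0, size=(20, 2)),  # saccade-like samples
    rng.normal((15, 12), 3.0, size=(60, 2)),   # pursuit/VOR-like samples
])

model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                        n_iter=50, random_state=0)
model.fit(X)               # unsupervised fit over the whole recording
states = model.predict(X)  # most likely hidden state per sample (Viterbi)
print(np.bincount(states))
```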
Citations: 51
ECSGlasses and EyePliances: using attention to open sociable windows of interaction
Pub Date: 2004-03-22 DOI: 10.1145/968363.968384
Jeffrey S. Shell, Roel Vertegaal, D. Cheng, Alexander W. Skaburskis, Changuk Sohn, A. James Stewart, Omar Aoudeh, C. Dickie
We present ECSGlasses: wearable eye contact sensing glasses that detect human eye contact. ECSGlasses report eye contact to digital devices, appliances and EyePliances in the user's attention space. Devices use this attentional cue to engage in a more sociable process of turn taking with users. This has the potential to reduce inappropriate intrusions, and limit their disruptiveness. We describe new prototype systems, including the Attentive Messaging Service (AMS), the Attentive Hit Counter, the first person attentive camcorder eyeBlog, and an updated Attentive Cell Phone. We also discuss the potential of these devices to open new windows of interaction using attention as a communication modality. Further, we present a novel signal-encoding scheme to uniquely identify EyePliances and users wearing ECSGlasses in multiparty scenarios.
Citations: 38
A gaze contingent environment for fostering social attention in autistic children
Pub Date: 2004-03-22 DOI: 10.1145/968363.968367
R. Ramloll, C. Trepagnier, M. Sebrechts, A. Finkelmeyer
This paper documents the engineering of a gaze contingent therapeutic environment for the exploration and validation of a proposed rehabilitative technique addressing attention deficits in 24- to 54-month-old autistic subjects. It discusses the current state of progress and lessons learnt so far while highlighting the outstanding engineering challenges of this project. We focus on calibration issues for this target group of users, explain the architecture of the system, and present our general workflow for the construction of the gaze contingent environment. While this work is being undertaken for therapeutic purposes, it is likely to be relevant to the construction of gaze contingent displays for entertainment.
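The core of any such environment is a gaze-contingent trigger loop: reward the child when gaze dwells on the social target. A sketch under assumed details (the ROI, dwell threshold, and the `tracker`/`present_reward` interfaces are hypothetical, since the paper's architecture is only summarized here):

```python
import time

ROI = (400, 200, 600, 400)  # target region on screen: x_min, y_min, x_max, y_max
DWELL_S = 0.5               # dwell time required before triggering the reward

def in_roi(x, y):
    return ROI[0] <= x <= ROI[2] and ROI[1] <= y <= ROI[3]

def run(tracker, present_reward):
    """tracker.sample() -> (x, y) gaze point; present_reward() plays the stimulus."""
    dwell_start = None
    while True:
        x, y = tracker.sample()  # hypothetical tracker API
        if in_roi(x, y):
            dwell_start = dwell_start or time.monotonic()
            if time.monotonic() - dwell_start >= DWELL_S:
                present_reward()
                dwell_start = None  # re-arm after triggering
        else:
            dwell_start = None
```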
Citations: 31