
Gaze-In '12: Latest Publications

A head-eye coordination model for animating gaze shifts of virtual characters
Pub Date: 2012-10-26 DOI: 10.1145/2401836.2401840
Sean Andrist, T. Pejsa, Bilge Mutlu, Michael Gleicher
We present a parametric, computational model of head-eye coordination that can be used in the animation of directed gaze shifts for virtual characters. The model is based on research in human neurophysiology. It incorporates control parameters that allow for adapting gaze shifts to the characteristics of the environment, the gaze targets, and the idiosyncratic behavioral attributes of the virtual character. A user study confirms that the model communicates gaze targets as effectively as real humans do, while being preferred subjectively to state-of-the-art models.
Citations: 23
Perception of gaze direction for situated interaction
Pub Date: 2012-10-26 DOI: 10.1145/2401836.2401839
Samer Al Moubayed, Gabriel Skantze
Accurate human perception of a robot's gaze direction is crucial for designing natural and fluent situated multimodal face-to-face interaction between humans and machines. In this paper, we present an experiment with 18 test subjects that quantifies how different gaze cues, synthesized using the Furhat back-projected robot head, affect the accuracy with which humans perceive the spatial direction of gaze. The study first quantifies the accuracy of perceived gaze direction in a human-human setup and compares it to synthesized gaze movements under different conditions: viewing the robot's eyes frontally or from a 45-degree side view. We also study the effect of 3D gaze by controlling both eyes to indicate the depth of the focal point (vergence), the use of gaze versus head pose, and the use of static versus dynamic eyelids. The findings are highly relevant to the design and control of robots and animated agents in situated face-to-face interaction.
Citations: 23
Simple multi-party video conversation system focused on participant eye gaze: "Ptolemaeus" provides participants with smooth turn-taking
Pub Date: 2012-10-26 DOI: 10.1145/2401836.2401851
Saori Yamamoto, Nazomu Teraya, Yumika Nakamura, N. Watanabe, Yande Lin, M. Bono, Yugo Takeuchi
This paper presents a prototype system that provides a natural multi-party conversation environment for participants in different places. Eye gaze is an important feature for maintaining smooth multi-party conversations because it indicates whom an utterance addresses or nominates the next speaker. Nevertheless, the most popular video conversation systems, such as Skype or FaceTime, do not support eye gaze interaction, which causes serious confusion in multi-party conversations: who is the addressee of the speech? Who is the next speaker? We propose a simple multi-party video conversation environment called Ptolemaeus that realizes eye gaze interaction among more than three participants without any special equipment. The system provides natural turn-taking in face-to-face video conversations and can be implemented more easily than previous schemes for eye gaze interaction.
Citations: 1
Analysis on learners' gaze patterns and the instructor's reactions in ballroom dance tutoring
Pub Date: 2012-10-26 DOI: 10.1145/2401836.2401844
Kosuke Kimura, Hung-Hsuan Huang, K. Kawagoe
Virtual conversational agents are awaited in the tutoring of physical skills such as sports or dance. This paper describes an ongoing project that aims to realize a virtual instructor for ballroom dance. First, a human-human experiment was conducted to collect an interaction corpus between a professional instructor and six learners. The verbal and non-verbal behaviors of the instructor were analyzed and serve as the basis of a state transition model for ballroom dance tutoring. In order to achieve intuitive and efficient instruction during the multi-modal interaction between the virtual instructor and the learner, the eye gaze patterns of the learners and the reactions of the instructor were analyzed. The analysis showed that the learner's attitude (confidence and concentration) could be approximated from their gaze patterns, and the instructor's tutoring strategy supported this finding.
Citations: 0
Addressee identification for human-human-agent multiparty conversations in different proxemics
Pub Date: 2012-10-26 DOI: 10.1145/2401836.2401842
N. Baba, Hung-Hsuan Huang, Y. Nakano
This paper proposes a method for identifying the addressee based on speech and gaze information, and shows that the proposed method is applicable to human-human-agent multiparty conversations in different proxemics. First, we collected human-human-agent interactions in different proxemics and, by analyzing the data, found that people spoke with a higher pitch, more loudly, and more slowly when talking to the agent. We also confirmed that this speech style was consistent regardless of the proxemics. Then, employing an SVM, we propose a general addressee estimation model that can be used in different proxemics; the model achieved over 80% accuracy in 10-fold cross-validation.
Citations: 15
Visual interaction and conversational activity
Pub Date: 2012-10-26 DOI: 10.1145/2401836.2401847
Andres Levitski, J. Radun, Kristiina Jokinen
In addition to the contents of their speech, people who are engaged in a conversation express themselves in many nonverbal ways. This means that people interact and are attended to even when they are not speaking. In this pilot study, we created an experimental setup for a three-party interactive situation in which one participant remained silent throughout the session and the gaze of one of the active subjects was tracked. The eye-tracked subject was unaware of the setup. The pilot study used only two test subjects, but the results provide some clues toward estimating how the behavior and activity of the non-speaking participant might affect the other participants' conversational activity and the situation itself. We also found that the speaker's gaze activity differs between the beginning and the end of an utterance, indicating that the speaker's focus of attention toward the partner depends on the turn-taking situation. Drawing on the experience gained in this trial, we point out several considerations that might help avoid pitfalls when designing a more extensive study of the subject.
Citations: 15
Move it there, or not?: the design of voice commands for gaze with speech
Pub Date: 2012-10-26 DOI: 10.1145/2401836.2401848
Monika Elepfandt, Martin Grund
This paper presents an experiment conducted to investigate gaze combined with voice commands. There has been very little research on the design of voice commands for this kind of input; it is not yet known whether users prefer longer sentences, as in natural dialogue, or short commands. In the experiment, three different voice commands were compared in a simple task in which participants had to drag & drop, rotate, and resize objects. It turned out that the shortness of a voice command -- in terms of the number of words -- is more important than it being absolutely natural. Participants preferred the voice command with the fewest words and the fewest syllables. Among the voice commands with the same number of syllables, users also preferred the one with the fewest words, even though there were no large differences in time or errors.
Citations: 19
A communication support interface based on learning awareness for collaborative learning
Pub Date: 2012-10-26 DOI: 10.1145/2401836.2401854
Yuki Hayashi, T. Kojiri, Toyohide Watanabe
The development of information and communication technologies allows learners to study together with others through networks. To realize successful collaborative learning in such distributed environments, supporting communication is important because participants acquire knowledge by exchanging utterances. To address this issue, this paper proposes a communication support interface for network-based remote collaborative learning. To make full use of communication opportunities, it is desirable that participants be aware of information in the collaborative learning environment and feel a sense of togetherness with others. Our interface facilitates three types of awareness: awareness of participants, awareness of utterances, and awareness of contributions to the discussion. We believe our system facilitates communication among participants in a CSCL environment.
Citations: 0
Semantic interpretation of eye movements using designed structures of displayed contents
Pub Date: 2012-10-26 DOI: 10.1145/2401836.2401853
Erina Ishikawa, Ryo Yonetani, H. Kawashima, Takatsugu Hirayama, T. Matsuyama
This paper presents a novel framework for interpreting eye movements using the semantic relations and spatial layouts of displayed contents, i.e., the designed structure. We represent eye movements in a multi-scale, interval-based manner and associate them with various semantic relations derived from the designed structure. In preliminary experiments, we apply the proposed framework to eye movements made while browsing catalog contents, and we confirm the effectiveness of the framework via user-state estimation.
Citations: 5
Hard lessons learned: mobile eye-tracking in cockpits
Pub Date: 2012-10-26 DOI: 10.1145/2401836.2401843
Hana Vrzakova, R. Bednarik
Eye-tracking is an attractive tool for testing design alternatives at all stages of interface evaluation. Access to the operator's visual attention behaviors provides information that supports design decisions. While mobile eye-tracking increases ecological validity, it also brings numerous constraints. In this work, we discuss mobile eye-tracking issues in the complex environment of a business jet flight simulator in an industrial research setting. The cockpit and its low illumination directly limited the setup of the eye-tracker and the quality of recordings and evaluations. Here we present lessons learned and best practices for setting up an eye-tracker under challenging simulation conditions.
Citations: 13