Effect of Grasping Uniformity on Estimation of Grasping Region from Gaze Data

Pimwalun Witchawanitchanun, Zeynep Yücel, Akito Monden, P. Leelaprute
DOI: 10.1145/3349537.3352787
Published: 2019-09-25, Proceedings of the 7th International Conference on Human-Agent Interaction
Citations: 1

Abstract

This study explores the estimation of the grasping region of objects from gaze data. Our study is distinguished from previous work by accounting for the "grasping uniformity" of the objects. In particular, we consider three types of graspable objects: (i) objects with a well-defined graspable part (e.g. a handle), (ii) objects without a grip but with an intuitive grasping region, and (iii) objects without any grip or intuitive grasping region. We assume that these types define how "uniform" the grasping region is across different graspers. In the experiments, we use the "Learning to grasp" data set and apply the method of [Pramot et al. 2018] to estimate the grasping region from gaze data. We compute the similarity between the estimations and the ground-truth annotations for the three types of objects, considering subjects who (a) perform free viewing and (b) view the images with the intention of grasping. In line with many previous studies, similarity is found to be higher for non-graspers. An interesting finding is that the difference in similarity (between free viewing and motivated-to-grasp viewing) is higher for type-iii objects, and comparable for type-i and type-ii objects. Based on this, we believe that the estimation of the grasping region from gaze data offers a larger potential to "learn" grasping, particularly for type-iii objects.
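The pipeline the abstract describes, estimating a grasping region from gaze fixations and then scoring it against a ground-truth annotation, can be sketched as follows. This is a minimal illustration, not the method of [Pramot et al. 2018]: the Gaussian fixation-density estimator and the intersection-over-union similarity measure used here are assumptions standing in for the paper's actual estimation method and similarity metric, and the function names `gaze_region_mask` and `iou` are hypothetical.

```python
import numpy as np

def gaze_region_mask(fixations, shape, sigma=15.0, threshold=0.5):
    """Estimate a binary grasping-region mask from gaze fixations by
    accumulating a Gaussian density around each fixation point and
    thresholding the normalized density (illustrative stand-in for
    the estimation method applied in the paper)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    density = np.zeros(shape, dtype=float)
    for fx, fy in fixations:  # fixation coordinates in (x, y) pixels
        density += np.exp(-((xs - fx) ** 2 + (ys - fy) ** 2) / (2 * sigma ** 2))
    density /= density.max()
    return density >= threshold

def iou(mask_a, mask_b):
    """Intersection-over-union of two binary masks: one plausible
    similarity measure between an estimated grasping region and a
    ground-truth annotation (1.0 = identical, 0.0 = disjoint)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0
```

Comparing the mean `iou` score across the free-viewing and grasp-intent subject groups, separately for each of the three object types, would reproduce the kind of per-type similarity comparison the abstract reports.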