Workshop on object recognition for input and mobile interaction

H. Yeo, Gierad Laput, N. Gillian, A. Quigley
{"title":"Workshop on object recognition for input and mobile interaction","authors":"H. Yeo, Gierad Laput, N. Gillian, A. Quigley","doi":"10.1145/3098279.3119839","DOIUrl":null,"url":null,"abstract":"Today we can see an increasing number of object recognition systems of very different sizes, portability, embedability and form factors which are starting to become part of the ubiquitous, tangible, mobile and wearable computing ecosystems that we might make use of in our daily lives. These systems rely on a variety of technologies including computer vision, radar, acoustic sensing, tagging and smart objects. Such systems open up a wide-range of new forms of touchless and mobile interaction. With systems deployed in mobile products then using everyday objects that can be found in the office or home, we can realise new applications and novel types of interaction. Object based interactions might revolutionise how people interact with a computer. System could be used in conjunction with a mobile phone, for example it could be trained to open a recipe app when you hold a phone to your stomach, or change its settings when operating with a gloved hand. Although the last few years have seen an increasing amount of research in this area, knowledge about this subject remains under explored, fragmented, and cuts across a set of related but heterogeneous issues. This workshop brings together researchers and practitioners interested in the challenges posed by Object Recognition for Input and Mobile Interaction.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3098279.3119839","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Today we can see an increasing number of object recognition systems of very different sizes, portability, embeddability and form factors, which are starting to become part of the ubiquitous, tangible, mobile and wearable computing ecosystems we might use in our daily lives. These systems rely on a variety of technologies, including computer vision, radar, acoustic sensing, tagging and smart objects. Such systems open up a wide range of new forms of touchless and mobile interaction. With these systems deployed in mobile products and applied to everyday objects found in the office or home, we can realise new applications and novel types of interaction. Object-based interactions might revolutionise how people interact with a computer. Such a system could be used in conjunction with a mobile phone; for example, it could be trained to open a recipe app when you hold the phone to your stomach, or to change its settings when it is operated with a gloved hand. Although the last few years have seen an increasing amount of research in this area, knowledge about this subject remains under-explored, fragmented, and cuts across a set of related but heterogeneous issues. This workshop brings together researchers and practitioners interested in the challenges posed by Object Recognition for Input and Mobile Interaction.
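
The recipe-app and gloved-hand examples above reduce to a simple pipeline: a recogniser turns raw sensor data into a context label, and the device maps that label to an action. The sketch below is purely illustrative and is not part of the workshop material; classify_context, launch_app, enable_glove_mode and the label names are hypothetical placeholders for an on-device recognition model and a platform app/intent API.

    # Illustrative sketch only (not from the paper): dispatching phone actions
    # from context labels produced by some object/context recogniser.
    from typing import Callable, Dict

    def classify_context(sensor_frame: bytes) -> str:
        """Hypothetical recogniser: return a label such as 'held_to_stomach'
        or 'gloved_hand' from a raw sensor frame (vision, radar, acoustic, ...)."""
        raise NotImplementedError("replace with a real on-device model")

    def launch_app(name: str) -> None:
        # Hypothetical stand-in for the platform's app-launch / intent API.
        print(f"[action] launching {name}")

    def enable_glove_mode() -> None:
        print("[action] switching UI to glove-friendly settings")

    # Context label -> phone action, mirroring the examples in the abstract.
    ACTIONS: Dict[str, Callable[[], None]] = {
        "held_to_stomach": lambda: launch_app("recipe app"),
        "gloved_hand": enable_glove_mode,
    }

    def on_sensor_frame(frame: bytes) -> None:
        label = classify_context(frame)
        action = ACTIONS.get(label)
        if action is not None:
            action()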