Demo: Multi-device Gestural Interfaces

Vu H. Tran, Youngki Lee, Archan Misra
{"title":"演示:多设备手势界面","authors":"Vu H. Tran, Youngki Lee, Archan Misra","doi":"10.1145/2938559.2938574","DOIUrl":null,"url":null,"abstract":"Varieties of wearable devices such as smart watches, Virtual/Augmented Reality devices (AR/VR) are much more affordable with interesting capabilities. In our vision, a person may use more than one devices at a time, and they form an eco-system of wearable devices. Therefore, we aim to build a system where an application expands its input and output among different devices, and adapts its input/output stream for different contexts. For example, a user wears a smart watch, a pair of smart glasses, and a smart phone in his pocket. Normally, the application on the mobile phone uses its touch screen as the input/output modality; but if the user put the mobile phone in his pocket, and wear the smart glasses, the application uses the gestures from smart watches as input, and the display of the smart glasses as output. Another advantage of such a multi-device system we want to support is multi-limb gesture. There is quite equal preference between one-handed and two-handed gestures [2]. Especially, two-handed gestures may have a potential use in VR/AR, and they provide a more natural input modality. However, there are three main challenges that need to be solved to achieve our goal. The first challenge is latency. For interactive applications, latency is crucial. For example, in virtual drumming application, what a user hears affect the timing of the next drum-hit. The second challenge is energy. It is well known that energy consumption is the bottle-neck of wearable devices. In an environment of multiple devices, energy consumption has to be optimized for all devices. We believe another challenge for such an multi-device environment is the ability of adaptation. It is even annoying to require the user to configure devices whenever the context changes, so the adaptability will be much more beneficial. 
For example, when the user start walking and wearing the smart glasses, the system automatically disables gesture control and shows the notification on the glasses. In multi-device system, the architecture is crucial for every device to work efficiently. Combining all data and process them in a central device forces the central device to stay in the system forever. Moreover, transmission of a large amount of data via bluetooth consumes quite much energy [1]. We therefore deploy a lightweight recognizer on each wearable device to recognize primitive gestures. Other devices can acquire these primitive gestures and fuse them into more complex gestures. For example, fusion of motion gestures from two devices, or fusion of motion gestures","PeriodicalId":298684,"journal":{"name":"MobiSys '16 Companion","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Demo: Multi-device Gestural Interfaces\",\"authors\":\"Vu H. Tran, Youngki Lee, Archan Misra\",\"doi\":\"10.1145/2938559.2938574\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Varieties of wearable devices such as smart watches, Virtual/Augmented Reality devices (AR/VR) are much more affordable with interesting capabilities. In our vision, a person may use more than one devices at a time, and they form an eco-system of wearable devices. Therefore, we aim to build a system where an application expands its input and output among different devices, and adapts its input/output stream for different contexts. For example, a user wears a smart watch, a pair of smart glasses, and a smart phone in his pocket. 
Normally, the application on the mobile phone uses its touch screen as the input/output modality; but if the user put the mobile phone in his pocket, and wear the smart glasses, the application uses the gestures from smart watches as input, and the display of the smart glasses as output. Another advantage of such a multi-device system we want to support is multi-limb gesture. There is quite equal preference between one-handed and two-handed gestures [2]. Especially, two-handed gestures may have a potential use in VR/AR, and they provide a more natural input modality. However, there are three main challenges that need to be solved to achieve our goal. The first challenge is latency. For interactive applications, latency is crucial. For example, in virtual drumming application, what a user hears affect the timing of the next drum-hit. The second challenge is energy. It is well known that energy consumption is the bottle-neck of wearable devices. In an environment of multiple devices, energy consumption has to be optimized for all devices. We believe another challenge for such an multi-device environment is the ability of adaptation. It is even annoying to require the user to configure devices whenever the context changes, so the adaptability will be much more beneficial. For example, when the user start walking and wearing the smart glasses, the system automatically disables gesture control and shows the notification on the glasses. In multi-device system, the architecture is crucial for every device to work efficiently. Combining all data and process them in a central device forces the central device to stay in the system forever. Moreover, transmission of a large amount of data via bluetooth consumes quite much energy [1]. We therefore deploy a lightweight recognizer on each wearable device to recognize primitive gestures. Other devices can acquire these primitive gestures and fuse them into more complex gestures. 
For example, fusion of motion gestures from two devices, or fusion of motion gestures\",\"PeriodicalId\":298684,\"journal\":{\"name\":\"MobiSys '16 Companion\",\"volume\":\"23 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-06-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"MobiSys '16 Companion\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2938559.2938574\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"MobiSys '16 Companion","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2938559.2938574","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

A variety of wearable devices, such as smart watches and virtual/augmented reality (VR/AR) headsets, are now far more affordable and offer interesting capabilities. In our vision, a person may use more than one device at a time, and together these devices form an ecosystem of wearables. We therefore aim to build a system in which an application spreads its input and output across different devices and adapts its input/output streams to different contexts. For example, suppose a user wears a smart watch and a pair of smart glasses and carries a smartphone in a pocket. Normally, the application on the phone uses its touch screen for both input and output; but if the user puts the phone in a pocket and puts on the smart glasses, the application instead uses gestures from the smart watch as input and the glasses' display as output. Another capability we want such a multi-device system to support is multi-limb gestures. Users show roughly equal preference for one-handed and two-handed gestures [2]. Two-handed gestures in particular have potential uses in VR/AR, where they provide a more natural input modality. However, three main challenges must be solved to achieve this goal. The first is latency, which is crucial for interactive applications: in a virtual drumming application, for example, what a user hears affects the timing of the next drum hit. The second is energy. Energy consumption is a well-known bottleneck for wearable devices, and in a multi-device environment it must be optimized across all devices. The third challenge, we believe, is adaptability. Requiring the user to reconfigure devices whenever the context changes is annoying, so automatic adaptation is far more useful. For example, when the user starts walking while wearing the smart glasses, the system automatically disables gesture control and shows notifications on the glasses.
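The context-driven adaptation rules described above can be sketched as a simple selection function. This is an illustrative sketch only, not the authors' implementation; the `Context` fields and modality names are assumptions chosen to mirror the examples in the abstract.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Snapshot of the user's device context (hypothetical fields)."""
    phone_in_pocket: bool
    glasses_worn: bool
    walking: bool

def select_modalities(ctx: Context) -> dict:
    """Pick input/output modalities for the current context.

    Mirrors the rules in the abstract: the phone's touch screen is the
    default; a pocketed phone plus worn glasses switches to watch gestures
    and the glasses' display; walking disables gesture input.
    """
    if ctx.phone_in_pocket and ctx.glasses_worn:
        gesture_input = not ctx.walking  # walking disables gesture control
        return {
            "input": "watch-gestures" if gesture_input else "none",
            "output": "glasses-display",
        }
    # Default: phone in hand, touch screen for both input and output.
    return {"input": "phone-touchscreen", "output": "phone-screen"}
```

Centralizing this decision in one function keeps the adaptation policy in a single place, so adding a new context signal (e.g. a new sensor) only touches one rule table.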
In a multi-device system, the architecture is crucial for every device to work efficiently. Combining all data and processing it on a central device forces that device to remain in the system permanently; moreover, transmitting large amounts of data over Bluetooth consumes considerable energy [1]. We therefore deploy a lightweight recognizer on each wearable device to recognize primitive gestures. Other devices can acquire these primitive gestures and fuse them into more complex gestures, for example by fusing motion gestures from two devices into a single two-handed gesture.
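The fusion step can be sketched as follows: each device emits timestamped primitive gestures, and a peer combines two primitives from different devices into a composite gesture when they occur close together in time. This is a minimal sketch under assumed names (`PrimitiveGesture`, the 250 ms window), not the paper's recognizer.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PrimitiveGesture:
    device: str        # e.g. "left-watch", "right-watch" (hypothetical IDs)
    name: str          # e.g. "swipe-up"
    timestamp_ms: int  # device-local timestamp of the gesture

def fuse(a: PrimitiveGesture, b: PrimitiveGesture,
         window_ms: int = 250) -> Optional[str]:
    """Fuse two primitives from different devices into a composite gesture.

    Returns a composite gesture label if the primitives come from distinct
    devices and fall within the time window, else None.
    """
    if a.device == b.device:
        return None  # same limb: nothing to fuse
    if abs(a.timestamp_ms - b.timestamp_ms) > window_ms:
        return None  # too far apart in time to be one intentional gesture
    # Order deterministically by device ID so the label is stable.
    first, second = sorted((a, b), key=lambda g: g.device)
    return f"two-handed:{first.name}+{second.name}"
```

Because only short primitive-gesture events cross the Bluetooth link rather than raw sensor streams, this division of labor also addresses the energy concern raised above.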