
Proceedings of the 2016 Symposium on Spatial User Interaction: Latest Publications

Development of a Toolkit for Creating Kinetic Garments Based on Smart Hair Technology
Pub Date : 2016-10-15 DOI: 10.1145/2983310.2989182
Mage Xue, Masaru Ohkubo, Miki Yamamura, Hiroko Uchiyama, T. Nojima, Yael Friedman
Although there are many kinetic garment artworks and studies (e.g., [3]), installing kinetic elements into garments is often difficult for practitioners in the textile field, because the kinetic elements themselves are complex to handle. Simple technology is therefore needed to let such practitioners create new kinetic garments easily. In this project, we propose a simple toolkit for adding kinetic functions to textiles. The toolkit consists of Smart Hairs, which are fine, lightweight bending actuators, and an Arduino-based microcontroller. We describe the basic design of the proposed toolkit. Furthermore, we held a workshop with students majoring in fashion and textiles to investigate the effect of the toolkit.
Citations: 3
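The abstract does not give the actuator control scheme, but the toolkit pairs Smart Hair bending actuators with an Arduino-class microcontroller, which suggests a simple drive-signal mapping. A minimal sketch of such a mapping, with the cap value chosen as an illustrative safety margin rather than taken from the paper:

```python
def bend_to_duty(bend, max_duty=0.8):
    """Map a normalized bend command (0.0-1.0) to a PWM duty cycle.

    Clamps the input so the actuator is never overdriven; max_duty caps
    the drive level (an assumed safety margin, not a value from the paper).
    """
    bend = min(max(bend, 0.0), 1.0)
    return bend * max_duty

print(bend_to_duty(0.0))   # 0.0 -- a straight hair gets no drive
print(bend_to_duty(1.5))   # 0.8 -- out-of-range commands are clamped, then capped
```

On an actual Arduino the returned duty cycle would feed something like `analogWrite`, scaled to the 0-255 range.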
TickTockRay: Smartwatch Raycasting for Mobile HMDs
Pub Date : 2016-10-15 DOI: 10.1145/2983310.2989184
Krzysztof Pietroszek, D. Kharlamov
We present TickTockRay, a smartwatch-based 3D pointing technique for smartphone-based immersive environments. Our work demonstrates that smartwatch-based raycasting may be a practical alternative to head-rotation-based pointing or specialized input devices. We release TickTockRay as an open-source plugin for Unity and provide an example of its use in a virtual reality clone of the Minecraft game.
Citations: 3
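The core of smartwatch raycasting is turning the watch's orientation into a pointing ray and intersecting it with the scene. A minimal sketch, assuming a simple yaw/pitch convention (the released plugin is a Unity implementation; its actual axis conventions are not given in the abstract):

```python
import math

def ray_direction(yaw_deg, pitch_deg):
    """Convert watch yaw/pitch (degrees) into a unit ray direction.

    Convention (an assumption): yaw rotates about the vertical y-axis,
    pitch about the x-axis, and (0, 0) points straight ahead down -z.
    """
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.sin(yaw) * math.cos(pitch),
            math.sin(pitch),
            -math.cos(yaw) * math.cos(pitch))

def cast_to_plane(origin, direction, plane_z):
    """Intersect the ray with the plane z = plane_z; None if parallel or behind."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if abs(dz) < 1e-9:
        return None
    t = (plane_z - oz) / dz
    if t < 0:
        return None
    return (ox + t * dx, oy + t * dy, plane_z)

# Pointing straight ahead from the origin hits the plane at its center.
print(cast_to_plane((0, 0, 0), ray_direction(0, 0), -2.0))  # (0.0, 0.0, -2.0)
```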
Shift-Sliding and Depth-Pop for 3D Positioning
Pub Date : 2016-10-15 DOI: 10.1145/2983310.2991067
Junwei Sun, W. Stuerzlinger, Dmitri Shuralyov
We introduce two new 3D positioning methods. The techniques enable rapid, yet easy-to-use positioning of objects in 3D scenes. With SHIFT-Sliding, the user can override the default assumption of contact and non-collision for sliding, and lift objects into the air or make them collide with other objects. DEPTH-POP maps mouse wheel actions to all object positions along the mouse ray, where the object meets the default assumptions for sliding. We will demonstrate the two methods in a desktop environment with the mouse and keyboard as interaction devices. Both methods use frame buffer techniques for efficiency.
Citations: 10
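DEPTH-POP's key idea, per the abstract, is mapping mouse wheel steps to the discrete set of positions along the mouse ray where the object satisfies the sliding assumptions. A minimal sketch of that stepping logic, assuming the candidate depths have already been found (the paper computes them with frame-buffer techniques) and that the selection clamps at either end:

```python
def depth_pop(candidates, index, wheel_step):
    """Step through candidate depths along the mouse ray.

    `candidates` are sorted ray depths at which the object meets the
    sliding assumptions (contact, no collision); each wheel step moves
    the selection one candidate, clamped at the ends (clamping is an
    assumption of this sketch).
    """
    new_index = max(0, min(len(candidates) - 1, index + wheel_step))
    return new_index, candidates[new_index]

depths = [1.2, 3.5, 7.0]           # e.g. tabletop, shelf, back wall
print(depth_pop(depths, 0, +1))    # (1, 3.5)
print(depth_pop(depths, 2, +1))    # (2, 7.0) -- clamped at the farthest surface
```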
AR Tabletop Interface Using an Optical See-Through HMD
Pub Date : 2016-10-15 DOI: 10.1145/2983310.2989180
Nozomi Sugiura, T. Komuro
We propose a user interface that superimposes a virtual touch panel on a flat surface using an optical see-through head-mounted display and an RGB-D camera. The user can use the interface hands-free and operate it with both hands. The interface superimposes virtual objects on the real scene without markers. In addition, it recognizes the three-dimensional positions of the user's fingers, allowing the user to operate the virtual touch panel. We developed several applications in which the user performs various operations on the virtual touch panel.
Citations: 1
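With 3D fingertip positions from the RGB-D camera, deciding whether the user is "touching" the virtual panel reduces to a point-to-plane distance test. A minimal sketch, where the 1 cm threshold is an illustrative value, not one from the paper:

```python
def is_touching(fingertip, plane_point, plane_normal, threshold=0.01):
    """Report a touch when the fingertip lies within `threshold` meters
    of the virtual panel plane (signed distance along the unit normal).
    """
    d = sum((f - p) * n for f, p, n in zip(fingertip, plane_point, plane_normal))
    return abs(d) <= threshold

plane_pt, plane_n = (0, 0, 0), (0, 0, 1)   # panel lying in the z = 0 plane
print(is_touching((0.1, 0.2, 0.005), plane_pt, plane_n))  # True  (5 mm away)
print(is_touching((0.1, 0.2, 0.100), plane_pt, plane_n))  # False (10 cm away)
```

A real implementation would also project the fingertip into panel coordinates to decide which button was hit; this sketch covers only the touch decision.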
Interacting with Maps on Optical Head-Mounted Displays
Pub Date : 2016-10-15 DOI: 10.1145/2983310.2985747
D. Rudi, I. Giannopoulos, P. Kiefer, Christian Peier, M. Raubal
This paper explores the design space for interacting with maps on Optical (See-Through) Head-Mounted Displays (OHMDs). The resulting interactions were evaluated in a comprehensive experiment involving 31 participants. More precisely, novel head-based interactions were compared with well-known haptic interactions on an OHMD regarding efficiency, effectiveness, user experience, and perceived cognitive workload. The tasks involved navigating maps by panning, by zooming, and by both panning and zooming. The results suggest that interaction methods exploiting congruent spatial relationships, i.e., mappings between the same axis in the control and display space, outperform others. In particular, the head-based interactions incorporating such mappings significantly outperformed the haptic interactions for panning tasks and for combined panning-and-zooming tasks.
Citations: 18
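A congruent mapping in the sense the abstract describes ties each control axis to the same display axis: horizontal head rotation (yaw) pans the map horizontally, vertical rotation (pitch) pans it vertically. A minimal sketch, where the linear gain in pixels per degree is an assumed value, not one reported in the paper:

```python
def head_to_pan(yaw_deg, pitch_deg, gain=10.0):
    """Map head rotation to map panning with congruent axes.

    Yaw (horizontal head turn) pans the map horizontally and pitch pans
    it vertically, so control and display axes match. The linear gain
    (pixels per degree) is an illustrative assumption.
    """
    return (yaw_deg * gain, pitch_deg * gain)

# Turning the head 3 degrees right and 1.5 degrees down pans accordingly.
print(head_to_pan(3.0, -1.5))  # (30.0, -15.0)
```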
Session details: Interaction II
Pub Date : 2016-10-15 DOI: 10.1145/3248574
K. Johnsen
Citations: 0
Providing Assistance for Orienting 3D Objects Using Monocular Eyewear
Pub Date : 2016-10-15 DOI: 10.1145/2983310.2985764
Mengu Sukan, Carmine Elvezio, Steven K. Feiner, B. Tversky
Many tasks require that a user rotate an object to match a specific orientation in an external coordinate system. This includes tasks in which one object must be oriented relative to a second prior to assembly and tasks in which objects must be held in specific ways to inspect them. Research has investigated guidance mechanisms for some 6DOF tasks, using wide-field-of-view, stereoscopic virtual and augmented reality head-worn displays (HWDs). However, there has been relatively little work directed toward smaller-field-of-view lightweight monoscopic HWDs, such as Google Glass, which may remain more comfortable and less intrusive than stereoscopic HWDs in the near future. We have designed and implemented a novel visualization approach and three additional visualizations representing different paradigms for guiding unconstrained manual 3DOF rotation, targeting these monoscopic HWDs. We describe our exploration of these paradigms and present the results of a user study evaluating the relative performance of the visualizations and showing the advantages of our new approach.
Citations: 14
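Any rotation-guidance visualization needs the remaining rotation between the object's current and target orientations to know what to display. The standard way to measure this, sketched below with unit quaternions (the paper's own visualizations are not specified at this level in the abstract):

```python
import math

def rotation_error_deg(q_current, q_target):
    """Angle in degrees still needed to rotate q_current onto q_target.

    Quaternions are (w, x, y, z) unit quaternions; the error is
    2*acos(|<q1, q2>|), the geodesic distance between rotations (the
    absolute value handles the q / -q double cover).
    """
    dot = abs(sum(a * b for a, b in zip(q_current, q_target)))
    dot = min(1.0, dot)        # guard against rounding slightly past 1.0
    return math.degrees(2.0 * math.acos(dot))

identity = (1.0, 0.0, 0.0, 0.0)
quarter_turn_z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print(round(rotation_error_deg(identity, quarter_turn_z)))  # 90
```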
Fast and Accurate 3D Selection Using Proxies with Spatial Relationships for Immersive Virtual Environments
Pub Date : 2016-10-15 DOI: 10.1145/2983310.2989200
Jun Lee, Ji-Hyung Park, J. Oh, J. Lee
In this paper, we propose a fast and accurate 3D selection method that visualizes proxies with spatial relationships. The proposed method reduces selection errors and selection time compared to the conventional ray-casting method.
Citations: 3
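Proxy-based selection typically begins by gathering all objects near the selection ray, then offering a proxy for each so the user can disambiguate. A minimal sketch of the gathering step under assumptions not stated in this short abstract (a cone around the ray, an illustrative 5-degree half-angle, a unit-length ray direction):

```python
import math

def proxy_candidates(origin, direction, objects, max_angle_deg=5.0):
    """Collect objects whose direction from `origin` lies within a small
    cone around the unit selection ray `direction`; each candidate would
    then receive a spatially arranged proxy for disambiguation.

    `objects` maps names to 3D positions; the cone half-angle is an
    illustrative choice, not a value from the paper.
    """
    def angle_to(pos):
        v = [p - o for p, o in zip(pos, origin)]
        norm = math.sqrt(sum(c * c for c in v))
        cos_a = sum(a * b for a, b in zip(v, direction)) / norm
        return math.degrees(math.acos(min(1.0, max(-1.0, cos_a))))

    return sorted(name for name, pos in objects.items()
                  if angle_to(pos) <= max_angle_deg)

objs = {"cup": (0.0, 0.0, -3.0), "plate": (0.1, 0.0, -3.0), "lamp": (2.0, 0.0, -3.0)}
print(proxy_candidates((0, 0, 0), (0.0, 0.0, -1.0), objs))  # ['cup', 'plate']
```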
Social Spatial Mashup for Place- and Object-Based Information Sharing
Pub Date : 2016-10-15 DOI: 10.1145/2983310.2989186
Choonsung Shin, Youngmin Kim, Jisoo Hong, Sunghee Hong, Hoonjong Kang
In this paper, we describe a social spatial mashup for information sharing in public spaces. The proposed mashup is based on RGB-D SLAM: it creates a 3D feature map and lets users place information and content in 3D space. Users can thus intuitively and spatially share information with one another, anchored to real objects and 3D space. We also implemented and tested the mashup method with Google Project Tango.
Citations: 2
Developing Interoperable Experiences with OpenUIX
Pub Date : 2016-10-15 DOI: 10.1145/2983310.2989203
Mikel Salazar, Carlos Laorden
In this demo, we present a framework that aims to provide UI designers (and end users) with a simple but powerful language for easily creating, modifying, and sharing advanced interaction spaces. This UI description language takes the users' context into account, not only to adapt the contents of the SUI to their real needs and desires, but also to let them automatically discover new and meaningful experiences as they go about their daily lives.
Citations: 0