
Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology: Latest Publications

Wearable Kinesthetic I/O Device for Sharing Muscle Compliance
Jun Nishida, Kenji Suzuki
In this paper, we present a wearable kinesthetic I/O device that can measure and intervene in multiple muscle activities simultaneously through the same electrodes. The developed system includes an I/O module capable of measuring the electromyogram (EMG) of four muscle tissues while applying electrical muscle stimulation (EMS) at the same time. The wearable system is configured in a scalable manner to achieve 1) a high stimulus frequency (up to 70 Hz), 2) wearable dimensions that let the device be placed along the limbs, and 3) flexibility in the number of I/O electrodes (up to 32 channels). In a pilot user study in which wrist compliance was shared between two persons, participants were able to recognize the level of their confederate's wrist joint compliance on a 4-point Likert scale. The developed system could benefit a physical therapist and a patient during peg-board hand rehabilitation by sharing their wrist compliance and grip force, which are usually difficult to observe through visual contact alone.
DOI: 10.1145/3266037.3266100 · Published: 2018-10-11
Citations: 4
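As a rough illustration of how a shared-electrode module might interleave stimulation and sensing, here is a minimal Python sketch. The function names, the pulse width, and the simulated ADC read are assumptions for illustration, not details from the paper; only the 70 Hz ceiling and four EMG channels come from the abstract.

```python
import random

STIM_HZ = 70        # maximum stimulation frequency reported in the paper
N_CHANNELS = 4      # EMG channels, one per measured muscle tissue

def read_emg(channel: int) -> float:
    """Stand-in for an ADC read on one electrode channel."""
    return random.uniform(-1.0, 1.0)

def stimulation_cycle(pulse_ms: float = 0.2):
    """One EMS/EMG cycle on shared electrodes: fire the stimulus
    pulse, then sample EMG on every channel during the quiet
    window that remains before the next pulse."""
    period_ms = 1000.0 / STIM_HZ
    # ... EMS pulse would be applied on the electrodes here ...
    samples = {ch: read_emg(ch) for ch in range(N_CHANNELS)}
    quiet_ms = period_ms - pulse_ms
    return samples, quiet_ms
```

At 70 Hz the cycle leaves roughly 14 ms of quiet time per period, which is the budget a real implementation would have for artifact-free EMG sampling.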
An Interactive Pipeline for Creating Visual Blends
Lydia B. Chilton, S. Petridis, Maneesh Agrawala
Visual blends are an advanced graphic design technique to draw users' attention to a message. They blend together two objects in a way that is novel and useful in conveying a message symbolically. This demo presents an interactive pipeline for creating visual blends that follows the iterative design process. Our pipeline decomposes the process into both computational techniques and human microtasks. It allows users to collaboratively generate visual blends with steps involving brainstorming, synthesis, and iteration. Our demo allows individual users to see how existing visual blends were made, edit or improve existing visual blends, and create new visual blends.
DOI: 10.1145/3266037.3271646 · Published: 2018-10-11
Citations: 0
Interactive Tangrami: Rapid Prototyping with Modular Paper-folded Electronics
Michael Wessely, Nadiya Morenko, Jürgen Steimle, M. Schmitz
Prototyping interactive objects with personal fabrication tools like 3D printers requires the maker to create each subsequent design artifact from scratch, which produces unnecessary waste and prevents the reuse of functional components. We present Interactive Tangrami: paper-folded, reusable building blocks (Tangramis) that can contain various sensor input and visual output capabilities. We propose a digital design toolkit that lets the user plan the shape and functionality of a design piece. The software manages communication with the physical artifact and streams the interaction data via the Open Sound Control (OSC) protocol to an application prototyping system (e.g., MaxMSP). The building blocks are fabricated digitally with a rapid and inexpensive ink-jet printing method. Our system allows physical user interfaces to be prototyped within minutes, without knowledge of the underlying technologies. We demo its usefulness with two application examples.
DOI: 10.1145/3266037.3271630 · Published: 2018-10-11
Citations: 4
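The OSC streaming step can be sketched with nothing but the Python standard library, since an OSC message is just a null-padded address string, a type-tag string, and big-endian arguments. The address pattern and port below are hypothetical; a real deployment could equally use a library such as python-osc.

```python
import struct

def osc_pad(s: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary
    s += b"\x00"
    while len(s) % 4:
        s += b"\x00"
    return s

def osc_message(address: str, *floats: float) -> bytes:
    """Build a minimal OSC 1.0 message carrying float32 arguments."""
    msg = osc_pad(address.encode())
    msg += osc_pad(("," + "f" * len(floats)).encode())
    for v in floats:
        msg += struct.pack(">f", v)  # big-endian float32 per OSC 1.0
    return msg

# Sending to a prototyping host (e.g. a udpreceive object in MaxMSP):
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(osc_message("/tangrami/touch", 0.75), ("127.0.0.1", 9000))
```

Because OSC rides on plain UDP, the same sensor stream can be consumed by MaxMSP, Pure Data, or any other prototyping environment without changes on the device side.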
Sense.Seat: Inducing Improved Mood and Cognition through Multisensorial Priming
Pedro F. Campos, Diogo Cabral, Frederica Gonçalves
User interface software and technologies have been evolving significantly and rapidly. This poster presents a breakthrough user experience that leverages multisensorial priming and embedded interaction, introducing an interactive piece of furniture called Sense.Seat. Sensory stimuli such as calm colors, lavender and other scents, and ambient soundscapes have traditionally been used to spark creativity and promote well-being. Sense.Seat is the first computational multisensorial seat that can be digitally controlled to vary the frequency and intensity of visual, auditory, and olfactory stimuli. It is a new user interface, shaped as a seat or pod, that primes the user toward improved mood and cognition, thereby improving the work environment.
DOI: 10.1145/3266037.3266105 · Published: 2018-10-11
Citations: 1
Enabling Single-Handed Interaction in Mobile and Wearable Computing
H. Yeo
Mobile and wearable computing are increasingly pervasive as people carry and use personal devices in everyday life. Screen sizes of such devices are becoming both larger and smaller to accommodate intimate and practical uses: some mobile device screens are growing to support new experiences (e.g., phablet, tablet, eReader), whereas screens on wearable devices are shrinking so they can fit in more places (e.g., smartwatch, wrist-band and eye-wear). However, these trends make it difficult to use such devices with only one hand, due to device placement, limited thumb reach, and the fat-finger problem. This is especially true on the many occasions when a user's other hand is occupied (encumbered) or unavailable. This thesis work explores, creates, and studies novel interaction techniques that enable effective single-handed use of mobile and wearable devices, empowering users to achieve more with their smart devices when only one hand is available.
DOI: 10.1145/3266037.3266129 · Published: 2018-10-11
Citations: 1
A WOZ Study of Feedforward Information on an Ambient Display in Autonomous Cars
Hauke Sandhaus, E. Hornecker
We describe the development and user testing of an ambient display for autonomous vehicles. Instead of providing feedback about driving actions once they are executed, it communicates driving decisions in advance via light signals in passengers' peripheral vision. This ambient display was tested in a WoZ-based on-the-road driving simulation of a fully autonomous vehicle. Findings from a preliminary study with 14 participants suggest that such a display might be particularly useful for communicating upcoming inertia changes to passengers.
DOI: 10.1145/3266037.3266111 · Published: 2018-10-11
Citations: 6
EyeExpress: Expanding Hands-free Input Vocabulary using Eye Expressions
Pin-Sung Ku, Te-Yen Wu, Mike Y. Chen
The muscles surrounding the human eye can perform a wide range of expressions, such as squinting, blinking, frowning, and raising the eyebrows. This work explores the use of these ocular expressions to expand the input vocabulary of hands-free interactions. We conducted a series of user studies: 1) to understand which eye expressions users could consistently perform among all possible expressions, and 2) to explore how these expressions can be used for hands-free interactions through a user-defined design process. Our results showed that most participants could consistently perform 9 of the 18 possible eye expressions. In the user-defined study, participants used the eye expressions to create hands-free interactions for state-of-the-art augmented reality (AR) head-mounted displays.
DOI: 10.1145/3266037.3266123 · Published: 2018-10-11
Citations: 1
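A user-defined vocabulary like this ultimately reduces to a mapping from recognized expressions to commands. The sketch below is entirely hypothetical (the paper does not publish its final mapping); it assumes a recognizer that emits string labels for the expressions users could perform reliably.

```python
# Hypothetical mapping from reliably performable eye expressions to
# hands-free AR commands; both the labels and commands are illustrative.
EXPRESSION_COMMANDS = {
    "blink_both": "select",
    "wink_left": "back",
    "squint": "zoom_in",
    "raise_brows": "open_menu",
    "frown": "cancel",
}

def dispatch(expression: str) -> str:
    """Map a recognized expression to a command, ignoring expressions
    outside the reliable set (e.g. ones users performed inconsistently)."""
    return EXPRESSION_COMMANDS.get(expression, "ignore")
```

Keeping unreliable expressions out of the table, rather than rejecting them in the recognizer, makes it easy to tailor the vocabulary per user.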
Haptopus: Haptic VR Experience Using Suction Mechanism Embedded in Head-mounted Display
Takayuki Kameoka, Yuki Kon, Takuto Nakamura, H. Kajimoto
With the spread of VR experiences using HMDs, many proposals have been made to improve those experiences by providing tactile information to the fingertips. However, there are problems, such as the difficulty of attaching and detaching the devices and the hindrance to free finger movement. To solve these issues, we developed "Haptopus," which embeds a tactile display in the HMD and presents tactile sensations to the face. In this paper, we conducted a preliminary investigation of the best suction pressure and compared Haptopus to conventional tactile presentation approaches. As a result, we confirmed that Haptopus improves the quality of the VR experience.
DOI: 10.1145/3266037.3271634 · Published: 2018-10-11
Citations: 9
Mixed-Reality for Object-Focused Remote Collaboration
Martin Feick, Anthony Tang, Scott Bateman
In this paper, we outline the design of a mixed-reality system to support object-focused remote collaboration. Here, being able to adjust collaborators' perspectives on the object, as well as to understand one another's perspective, is essential for effective collaboration over distance. We propose a low-cost mixed-reality system that allows users to: (1) quickly align and understand each other's perspective; (2) explore objects independently from one another; and (3) render gestures in the remote workspace. In this work, we focus on the expert's role and introduce an interaction technique that allows users to quickly manipulate 3D virtual objects in space.
DOI: 10.1145/3266037.3266102 · Published: 2018-10-11
Citations: 12
Haptic Interface Using Tendon Electrical Stimulation
Akifumi Takahashi, K. Tanabe, H. Kajimoto
This demonstration corresponds to our previous paper, which reports our finding that a proprioceptive force sensation can be produced by electrical stimulation applied from the skin surface to the tendon region (Tendon Electrical Stimulation: TES). We showed that TES can elicit a force sensation, and that adjusting the current parameters can control the magnitude of that sensation. Unlike electrical muscle stimulation (EMS), which also presents a force sensation by stimulating motor nerves to contract muscles, TES is thought to present a proprioceptive force sensation by stimulating the receptors or sensory nerves inside the tendon that sense the magnitude of muscle contraction. In the demo, we offer attendees the opportunity to try TES.
DOI: 10.1145/3266037.3271640 · Published: 2018-10-11
Citations: 2
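The claim that current parameters control the amount of sensation can be made concrete with a small parameter record. The field names, ranges, and the linear toy model below are assumptions for illustration, not values from the paper; a real mapping would need per-user calibration.

```python
from dataclasses import dataclass

@dataclass
class TESPulse:
    """One tendon-stimulation setting (illustrative fields only)."""
    amplitude_ma: float   # pulse current amplitude, milliamps
    frequency_hz: float   # pulse repetition rate
    width_us: float       # pulse width, microseconds

def perceived_force(p: TESPulse, max_ma: float = 10.0) -> float:
    """Toy model: normalized force sensation grows linearly with
    amplitude and is clamped to [0, 1]."""
    return min(max(p.amplitude_ma / max_ma, 0.0), 1.0)
```

Treating the stimulation setting as an explicit value object also makes it easy to sweep parameters in a psychophysics study, which is how a calibrated amplitude-to-sensation curve would replace the linear stand-in here.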