
Latest publications from the Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology

An Interactive Pipeline for Creating Visual Blends
Lydia B. Chilton, S. Petridis, Maneesh Agrawala
Visual blends are an advanced graphic design technique to draw users' attention to a message. They blend together two objects in a way that is novel and useful in conveying a message symbolically. This demo presents an interactive pipeline for creating visual blends that follows the iterative design process. Our pipeline decomposes the process into both computational techniques and human microtasks. It allows users to collaboratively generate visual blends with steps involving brainstorming, synthesis, and iteration. Our demo allows individual users to see how existing visual blends were made, edit or improve existing visual blends, and create new visual blends.
DOI: 10.1145/3266037.3271646 · Published: 2018-10-11 · Citations: 0
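The abstract above describes a pipeline that interleaves computational techniques with human microtasks across brainstorming, synthesis, and iteration. A minimal sketch of that kind of hybrid decomposition follows; the stage names, the `is_human` flag, and the example lambdas are illustrative assumptions, not details from the paper:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    """One step of an iterative design pipeline."""
    name: str
    is_human: bool  # True = human microtask, False = computational step
    run: Callable[[dict], dict]

def run_pipeline(stages: List[Stage], state: dict) -> dict:
    """Apply each stage in order, threading a shared state dict through."""
    for stage in stages:
        state = stage.run(state)
    return state

# Illustrative three-phase pipeline mirroring brainstorm -> synthesize -> iterate.
stages = [
    Stage("brainstorm", True,  lambda s: {**s, "ideas": ["orange", "sun"]}),
    Stage("synthesize", False, lambda s: {**s, "blend": "+".join(s["ideas"])}),
    Stage("iterate",    True,  lambda s: {**s, "approved": True}),
]
result = run_pipeline(stages, {"message": "healthy energy"})
```

Keeping each stage as a uniform transformation over shared state is one simple way to let human and computational steps be mixed freely, as the abstract's decomposition suggests.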
Wearable Kinesthetic I/O Device for Sharing Muscle Compliance
Jun Nishida, Kenji Suzuki
In this paper, we present a wearable kinesthetic I/O device that is able to measure and intervene in multiple muscle activities simultaneously through the same electrodes. The developed system includes an I/O module capable of measuring the electromyogram (EMG) of four muscle tissues while applying electrical muscle stimulation (EMS) at the same time. The wearable system is configured in a scalable manner to achieve 1) a high stimulus frequency (up to 70 Hz), 2) wearable dimensions that allow the device to be placed along the limbs, and 3) flexibility in the number of I/O electrodes (up to 32 channels). In a pilot user study in which wrist compliance was shared between two people, participants were able to recognize the level of their confederate's wrist joint compliance on a 4-point Likert scale. The developed system would benefit a physical therapist and a patient during hand rehabilitation with a peg board by sharing their wrist compliance and grip force, which are usually difficult to observe through visual contact.
DOI: 10.1145/3266037.3266100 · Published: 2018-10-11 · Citations: 4
Pop-up Robotics: Facilitating HRI in Public Spaces
Swapna Joshi, S. Šabanović
Human-Robot Interaction (HRI) research in public spaces often encounters delays and restrictions due to several factors, including the need for sophisticated technology, regulatory approvals, and public or community support. To remedy these concerns, we suggest that HRI researchers can apply the core philosophy of Tactical Urbanism, a concept from urban planning, to catalyze HRI in public spaces; to provide community feedback and information on the feasibility of future public deployments of robots; and to create social impact and forge connections with the community while spreading awareness of robots as a public resource. As a case study, we share the tactics used and strategies followed to conduct a pop-up style study of a robotic mailbox designed to support and raise awareness about homelessness. We discuss the benefits and challenges of the pop-up approach and recommend using it to enable the social studies of HRI not only to match but to precede the fast-paced technological advancement and deployment of robots.
DOI: 10.1145/3266037.3266125 · Published: 2018-10-11 · Citations: 2
Engagement Learning: Expanding Visual Knowledge by Engaging Online Participants
Ranjay Krishna, Donsuk Lee, Li Fei-Fei, Michael S. Bernstein
Most artificial intelligence (AI) systems to date have focused entirely on performance, and rarely, if at all, on their social interactions with people or on how to balance the AI's goals against those of its human collaborators. Learning quickly from interactions with people both poses social challenges and remains technically unresolved. In this paper, we introduce engagement learning: a training approach that learns to trade off what the AI needs (the knowledge value of a label to the AI) against what people are interested in engaging with (the engagement value of the label). We realize our goal with ELIA (Engagement Learning Interaction Agent), a conversational AI agent whose goal is to learn new facts about the visual world by asking people engaging questions about the photos they upload to social media. Our current deployment of ELIA on Instagram receives a response rate of 26%.
DOI: 10.1145/3266037.3266110 · Published: 2018-10-11 · Citations: 2
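The trade-off the abstract describes, knowledge value to the AI versus engagement value to people, could be scored in many ways; a minimal sketch under one simple assumption (a linear combination with a mixing weight, which the paper does not specify) looks like this. The candidate questions and their values are purely illustrative:

```python
def question_utility(knowledge_value: float,
                     engagement_value: float,
                     alpha: float = 0.5) -> float:
    """Blend what the AI would learn from a label with how likely
    a person is to engage with the question. alpha is a hypothetical
    mixing weight, not a parameter from the paper."""
    return alpha * knowledge_value + (1.0 - alpha) * engagement_value

# Pick the question with the best blended score.
# Values are (knowledge_value, engagement_value), both in [0, 1].
candidates = {
    "What breed is this dog?":     (0.9, 0.2),
    "Where was this photo taken?": (0.4, 0.8),
}
best = max(candidates, key=lambda q: question_utility(*candidates[q]))
```

With an even weighting, a question of moderate knowledge value but high engagement value can win over a highly informative but unengaging one, which is the behavior the abstract's framing suggests.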
Enabling Single-Handed Interaction in Mobile and Wearable Computing
H. Yeo
Mobile and wearable computing are increasingly pervasive as people carry and use personal devices in everyday life. Screen sizes of such devices are diverging to accommodate both intimate and practical uses. Some mobile device screens are becoming larger to accommodate new experiences (e.g., phablet, tablet, eReader), whereas screens on wearable devices are becoming smaller to allow them to fit into more places (e.g., smartwatch, wrist-band, and eye-wear). However, these trends make it difficult to use such devices with only one hand due to their placement, limited thumb reach, and the fat-finger problem. This is especially true given the many occasions when a user's other hand is occupied (encumbered) or unavailable. This thesis work explores, creates, and studies novel interaction techniques that enable effective single-handed use of mobile and wearable devices, empowering users to achieve more with their smart devices when only one hand is available.
DOI: 10.1145/3266037.3266129 · Published: 2018-10-11 · Citations: 1
A WOZ Study of Feedforward Information on an Ambient Display in Autonomous Cars
Hauke Sandhaus, E. Hornecker
We describe the development and user testing of an ambient display for autonomous vehicles. Instead of providing feedback about driving actions once they have been executed, it communicates driving decisions in advance via light signals in the passengers' peripheral vision. This ambient display was tested in a WoZ-based on-the-road driving simulation of a fully autonomous vehicle. Findings from a preliminary study with 14 participants suggest that such a display might be particularly useful for communicating upcoming inertia changes to passengers.
DOI: 10.1145/3266037.3266111 · Published: 2018-10-11 · Citations: 6
EyeExpress: Expanding Hands-free Input Vocabulary using Eye Expressions
Pin-Sung Ku, Te-Yen Wu, Mike Y. Chen
The muscles surrounding the human eye are capable of performing a wide range of expressions, such as squinting, blinking, frowning, and raising the eyebrows. This work explores the use of these ocular expressions to expand the input vocabulary of hands-free interactions. We conducted a series of user studies: 1) to understand which eye expressions users could consistently perform among all possible expressions, and 2) to explore how these expressions can be used for hands-free interactions through a user-defined design process. Our results showed that most participants could consistently perform 9 of the 18 possible eye expressions. In the user-defined study, participants used eye expressions to create hands-free interactions for state-of-the-art augmented reality (AR) head-mounted displays.
DOI: 10.1145/3266037.3266123 · Published: 2018-10-11 · Citations: 1
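An expanded eye-expression vocabulary ultimately feeds a mapping from recognized expressions to hands-free commands. A minimal sketch of such a binding table follows; the specific expressions and command names are hypothetical, since the paper's actual command set is user-elicited rather than fixed:

```python
# Hypothetical bindings from recognized eye expressions to AR commands.
EXPRESSION_COMMANDS = {
    "blink_both":  "select",
    "wink_left":   "go_back",
    "raise_brows": "open_menu",
    "squint":      "zoom_in",
    "frown":       "zoom_out",
}

def dispatch(expression: str) -> str:
    """Return the command bound to a recognized expression,
    or 'ignore' for expressions with no binding."""
    return EXPRESSION_COMMANDS.get(expression, "ignore")
```

Falling back to "ignore" for unbound expressions matters in practice, because the study found only a subset (9 of 18) of expressions could be performed consistently.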
Haptopus: Haptic VR Experience Using Suction Mechanism Embedded in Head-mounted Display
Takayuki Kameoka, Yuki Kon, Takuto Nakamura, H. Kajimoto
With the spread of VR experiences using HMDs, many proposals have been made to improve these experiences by providing tactile information to the fingertips. However, such devices can be difficult to attach and detach, and they can hinder free finger movement. To solve these issues, we developed "Haptopus," which embeds a tactile display in the HMD and presents tactile sensations to the face. In this paper, we report a preliminary investigation into the best suction pressure and compare Haptopus to conventional tactile presentation approaches. As a result, we confirmed that Haptopus improves the quality of the VR experience.
DOI: 10.1145/3266037.3271634 · Published: 2018-10-11 · Citations: 9
Mixed-Reality for Object-Focused Remote Collaboration
Martin Feick, Anthony Tang, Scott Bateman
In this paper, we outline the design of a mixed-reality system to support object-focused remote collaboration. Here, being able to adjust collaborators' perspectives on the object, as well as to understand one another's perspective, is essential for effective collaboration over distance. We propose a low-cost mixed-reality system that allows users to: (1) quickly align and understand each other's perspective; (2) explore objects independently of one another; and (3) render gestures in the remote user's workspace. In this work, we focus on the expert's role and introduce an interaction technique that allows users to quickly manipulate 3D virtual objects in space.
DOI: 10.1145/3266037.3266102 · Published: 2018-10-11 · Citations: 12
Haptic Interface Using Tendon Electrical Stimulation
Akifumi Takahashi, K. Tanabe, H. Kajimoto
This demonstration accompanies our previous paper, which reported our finding that a proprioceptive force sensation can be presented by electrical stimulation applied from the skin surface to the tendon region (Tendon Electrical Stimulation: TES). We showed that TES can elicit a force sensation, and that adjusting the current parameters can control its magnitude. Unlike electrical muscle stimulation (EMS), which also presents a force sensation by stimulating motor nerves to contract muscles, TES is thought to present a proprioceptive force sensation by stimulating the receptors or sensory nerves responsible for sensing the magnitude of muscle contraction inside the tendon. In the demo, we offer attendees the opportunity to try TES.
DOI: 10.1145/3266037.3271640 · Published: 2018-10-11 · Citations: 2