
Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology: Latest Publications

A Stretch-Flexible Textile Multitouch Sensor for User Input on Inflatable Membrane Structures & Non-Planar Surfaces
Kristian Gohlke, E. Hornecker
We present a textile sensor capable of detecting multi-touch and multi-pressure input on non-planar surfaces, and demonstrate how such sensors can be fabricated and integrated into pressure-stabilized membrane envelopes (i.e., inflatables). Our sensor design is both stretchable and flexible/bendable, and can conform to various three-dimensional surface geometries and shape-changing surfaces. We briefly outline an approach for basic signal acquisition from such sensors and show how they can be leveraged to measure the internal air pressure of inflatable objects without specialized air-pressure sensors. We further demonstrate how standard electronic circuits can be integrated with malleable inflatable objects without the need for rigid enclosures for mechanical protection.
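The abstract only outlines the signal-acquisition approach, so the following Python sketch illustrates one plausible reading: a row-column scan of a resistive taxel matrix, thresholded touch detection against a stored baseline, and a calibrated frame-wide mean as a proxy for internal air pressure. The grid size, function names, and constants (`read_adc`, `gain`) are our assumptions, not the authors' firmware.

```python
ROWS, COLS = 4, 4  # assumed taxel grid size

def read_adc(row, col):
    # On a microcontroller this would drive one row line and sample the
    # column's ADC; here we return a flat simulated reading.
    return 512

def scan_matrix():
    """One full frame: sample every row/column crossing."""
    return [[read_adc(r, c) for c in range(COLS)] for r in range(ROWS)]

def detect_touches(frame, baseline, threshold=50):
    """A taxel counts as touched when it rises well above its baseline."""
    return [(r, c, frame[r][c] - baseline[r][c])
            for r in range(ROWS) for c in range(COLS)
            if frame[r][c] - baseline[r][c] > threshold]

def estimate_air_pressure(frame, gain=0.1, offset=0.0):
    """Inflation pre-tensions the membrane and shifts every taxel's resting
    level, so the frame-wide mean can stand in for an air-pressure sensor
    once gain/offset are calibrated (both values assumed here)."""
    mean_level = sum(sum(row) for row in frame) / (ROWS * COLS)
    return gain * mean_level + offset

baseline = scan_matrix()                        # capture at rest
print(detect_touches(scan_matrix(), baseline))  # -> [] with simulated reads
print(estimate_air_pressure(scan_matrix()))     # -> 51.2 with assumed gain
```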
{"title":"A Stretch-Flexible Textile Multitouch Sensor for User Input on Inflatable Membrane Structures & Non-Planar Surfaces","authors":"Kristian Gohlke, E. Hornecker","doi":"10.1145/3266037.3271647","DOIUrl":"https://doi.org/10.1145/3266037.3271647","url":null,"abstract":"We present a textile sensor, capable of detecting multi-touch and multi-pressure input on non-planar surfaces and demonstrate how such sensors can be fabricated and integrated into pressure stabilized membrane envelopes (i.e. inflatables). Our sensor design is both stretchable and flexible/bendable and can conform to various three-dimensional surface geometries and shape-changing surfaces. We briefly outline an approach for basic signal acquisition from such sensors and how they can be leveraged to measure internal air-pressure of inflatable objects without specialized air-pressure sensors. We further demonstrate how standard electronic circuits can be integrated with malleable inflatable objects without the need for rigid enclosures for mechanical protection.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114094738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Investigation into Natural Gestures Using EMG for "SuperNatural" Interaction in VR
Chloe Eghtebas, Sandro Weber, G. Klinker
Can natural interaction requirements be fulfilled while still harnessing the "supernatural" fantasy of Virtual Reality (VR)? In this work we used off-the-shelf Electromyogram (EMG) sensors as an input device that affords natural gestures to perform the "supernatural" task of growing your arm in VR. We recorded 18 participants performing a simple retrieval task in two phases: an initial phase and a learning phase, in which the stretch arm was disabled and enabled respectively. The results show that the gestures used in the initial phase differ from the main gestures used to retrieve an object in our system, and that the time taken to complete the learning phase is highly variable across participants.
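To make the sensing side concrete, here is a minimal, hypothetical sketch of how windowed EMG features might be matched to gesture templates; the feature choice (per-channel RMS), the nearest-centroid classifier, and the gesture names are illustrative assumptions, not the study's actual pipeline.

```python
import math

def rms_features(window):
    """window: list of samples, each a list with one value per EMG channel."""
    n_ch = len(window[0])
    return [math.sqrt(sum(s[ch] ** 2 for s in window) / len(window))
            for ch in range(n_ch)]

def classify(features, templates):
    """Nearest-centroid match; templates maps gesture name -> feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda name: dist(features, templates[name]))

# Example: two hypothetical gestures recorded from a 2-channel armband.
templates = {"stretch_arm": [0.8, 0.2], "relax": [0.1, 0.1]}
window = [[0.7, 0.3], [0.9, 0.1], [0.8, 0.2]]
print(classify(rms_features(window), templates))  # -> "stretch_arm"
```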
{"title":"Investigation into Natural Gestures Using EMG for \"SuperNatural\" Interaction in VR","authors":"Chloe Eghtebas, Sandro Weber, G. Klinker","doi":"10.1145/3266037.3266115","DOIUrl":"https://doi.org/10.1145/3266037.3266115","url":null,"abstract":"Can natural interaction requirements be fulfilled while still harnessing the \"supernatural\" fantasy of Virtual Reality (VR)? In this work we used off the shelf Electromyogram (EMG) sensors as an input device which can afford natural gestures to preform the \"supernatural\" task of growing your arm in VR. We recorded 18 participants preforming a simple retrieval task in two phases; an initial and a learning phase where the stretch arm was disabled and enabled respectively. The results show that the gestures used in the initial phase are different than the main gestures used to retrieve an object in our system and that the times taken to complete the learning phase are highly variable across participants.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115968162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Wearable Haptic Device that Presents the Haptics Sensation Corresponding to Three Fingers on the Forearm
Taha K. Moriyama, Takuto Nakamura, Hiroyuki Kajimoto
In this demonstration, as an attempt at a new haptic presentation method for objects in virtual reality (VR) environments, we show a device that presents the haptic sensation of the fingertip on the forearm, rather than on the fingertip itself. The device adopts a five-bar linkage mechanism and can present both the strength and the direction of force. Compared with fingertip-mounted displays, it avoids the weight and size on the fingers that hinder their free movement. We have confirmed that the experience in the VR environment improves compared with a no-haptic-cues condition, even though haptic information is not presented directly to the fingertip.
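For readers unfamiliar with the mechanism, the sketch below shows the standard forward kinematics of a symmetric five-bar linkage, the type of mechanism the device uses to position its contact point: two motor angles determine the end effector via a circle intersection. The link lengths and the elbow-up branch choice are assumed values for illustration, not the authors' dimensions.

```python
import math

def five_bar_endpoint(theta1, theta2, base=0.04, l_prox=0.05, l_dist=0.07):
    """Motor angles (rad) -> end-effector (x, y). Motors sit at (0, 0) and
    (base, 0); each drives a proximal link of length l_prox; the two distal
    links of length l_dist meet at the tip."""
    p1 = (l_prox * math.cos(theta1), l_prox * math.sin(theta1))
    p2 = (base + l_prox * math.cos(theta2), l_prox * math.sin(theta2))
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > 2 * l_dist:
        raise ValueError("pose unreachable for these link lengths")
    a = d / 2                          # symmetric: both distal links equal
    h = math.sqrt(l_dist ** 2 - a ** 2)
    mx, my = p1[0] + a * dx / d, p1[1] + a * dy / d
    # pick the intersection on the far side of the base line (elbow-up)
    return (mx - h * dy / d, my + h * dx / d)

print(five_bar_endpoint(math.radians(100), math.radians(80)))
# -> roughly (0.02, 0.113) with the assumed geometry
```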
{"title":"Wearable Haptic Device that Presents the Haptics Sensation Corresponding to Three Fingers on the Forearm","authors":"Taha K. Moriyama, Takuto Nakamura, Hiyoyuki Kajimoto","doi":"10.1145/3266037.3271633","DOIUrl":"https://doi.org/10.1145/3266037.3271633","url":null,"abstract":"In this demonstration, as an attempt of a new haptic presentation method for objects in virtual reality (VR) environment, we show a device that presents the haptic sensation of the fingertip on the forearm, not on the fingertip. This device adopts a five-bar linkage mechanism and it is possible to present the strength, direction of force. Compared with a fingertip mounted type displays, it is possible to address the issues of their weight and size which hinder the free movement of fingers. We have confirmed that the experiences in the VR environment is improved compared with without haptics cues situation regardless of without presenting haptics information directly to the fingertip.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"9 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124943693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Towards a Symbiotic Human-Machine Depth Sensor: Exploring 3D Gaze for Object Reconstruction
Teresa Hirzle, Jan Gugenheimer, Florian Geiselhart, A. Bulling, E. Rukzio
Eye tracking is expected to become an integral part of future augmented reality (AR) head-mounted displays (HMDs), given that it can easily be integrated into existing hardware and provides a versatile interaction modality. To augment objects in the real world, AR HMDs require a three-dimensional understanding of the scene, which is currently obtained using depth cameras. In this work we aim to explore how 3D gaze data can be used to enhance scene understanding for AR HMDs by envisioning a symbiotic human-machine depth camera that fuses depth data with 3D gaze information. We present a first proof of concept, exploring to what extent we are able to recognise what a user is looking at by plotting 3D gaze data. To measure 3D gaze, we implemented a vergence-based algorithm and built an eye tracking setup consisting of a Pupil Labs headset and an OptiTrack motion capture system, allowing us to measure 3D gaze inside a 50x50x50 cm volume. We show first 3D gaze plots of "gazed-at" objects and describe our vision of a symbiotic human-machine depth camera that combines a depth camera and human 3D gaze information.
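The vergence idea can be made concrete with a few lines of vector math: treat each eye as the origin of a gaze ray and take the midpoint of the shortest segment between the two rays as the 3D gaze point. This is a generic sketch of that geometry, not necessarily the prototype's exact algorithm.

```python
import numpy as np

def gaze_point_3d(origin_l, dir_l, origin_r, dir_r):
    """origins: 3D eye positions; dirs: gaze directions per eye."""
    d_l = dir_l / np.linalg.norm(dir_l)
    d_r = dir_r / np.linalg.norm(dir_r)
    w0 = origin_l - origin_r
    b = d_l @ d_r
    denom = 1.0 - b * b
    if denom < 1e-9:                  # near-parallel rays: vergence undefined
        return None
    s = (b * (d_r @ w0) - (d_l @ w0)) / denom   # parameter along left ray
    t = ((d_r @ w0) - b * (d_l @ w0)) / denom   # parameter along right ray
    p_l = origin_l + s * d_l
    p_r = origin_r + t * d_r
    return (p_l + p_r) / 2.0          # midpoint of shortest segment

eye_l, eye_r = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])
target = np.array([0.1, 0.05, 0.4])
print(gaze_point_3d(eye_l, target - eye_l, eye_r, target - eye_r))
# -> approximately [0.1, 0.05, 0.4]
```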
{"title":"Towards a Symbiotic Human-Machine Depth Sensor: Exploring 3D Gaze for Object Reconstruction","authors":"Teresa Hirzle, Jan Gugenheimer, Florian Geiselhart, A. Bulling, E. Rukzio","doi":"10.1145/3266037.3266119","DOIUrl":"https://doi.org/10.1145/3266037.3266119","url":null,"abstract":"Eye tracking is expected to become an integral part of future augmented reality (AR) head-mounted displays (HMDs) given that it can easily be integrated into existing hardware and provides a versatile interaction modality. To augment objects in the real world, AR HMDs require a three-dimensional understanding of the scene, which is currently solved using depth cameras. In this work we aim to explore how 3D gaze data can be used to enhance scene understanding for AR HMDs by envisioning a symbiotic human-machine depth camera, fusing depth data with 3D gaze information. We present a first proof of concept, exploring to what extend we are able to recognise what a user is looking at by plotting 3D gaze data. To measure 3D gaze, we implemented a vergence-based algorithm and built an eye tracking setup consisting of a Pupil Labs headset and an OptiTrack motion capture system, allowing us to measure 3D gaze inside a 50x50x50 cm volume. We show first 3D gaze plots of \"gazed-at\" objects and describe our vision of a symbiotic human-machine depth camera that combines a depth camera and human 3D gaze information.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"226 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121349969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Hybrid Watch User Interfaces: Collaboration Between Electro-Mechanical Components and Analog Materials
A. Olwal
We introduce programmable material and electro-mechanical control to enable a set of hybrid watch user interfaces that symbiotically leverage the joint strengths of electro-mechanical hands and a dynamic watch dial. This approach adds computation and connectivity to existing materials while preserving the inherent physical qualities and abilities of traditional analog watches. We augment the watch's mechanical hands with micro-stepper motors for control, positioning, and mechanical expressivity. We extend the traditional watch dial with programmable pigments for non-emissive dynamic patterns. Together, these components enable a unique set of interaction techniques and user interfaces beyond their individual capabilities.
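As a rough illustration of stepper-driven hand control, the sketch below converts a target time into a signed step count for a minute hand, taking the shorter rotation direction. The steps-per-revolution figure is an assumption for illustration, not the paper's hardware specification.

```python
STEPS_PER_REV = 720  # hypothetical: motor plus gearing gives 720 steps/turn

def steps_to_time(current_step, target_minutes):
    """Return the shortest signed step delta that moves the minute hand
    from its current step position to the target time-of-day position."""
    target_step = round((target_minutes % 60) / 60 * STEPS_PER_REV)
    delta = (target_step - current_step) % STEPS_PER_REV
    if delta > STEPS_PER_REV / 2:
        delta -= STEPS_PER_REV     # step backwards if that path is shorter
    return delta

print(steps_to_time(0, 15))  # quarter past -> +180 steps (clockwise)
print(steps_to_time(0, 45))  # quarter to   -> -180 steps (counter-clockwise)
```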
{"title":"Hybrid Watch User Interfaces: Collaboration Between Electro-Mechanical Components and Analog Materials","authors":"A. Olwal","doi":"10.1145/3266037.3271650","DOIUrl":"https://doi.org/10.1145/3266037.3271650","url":null,"abstract":"We introduce programmable material and electro-mechanical control to enable a set of hybrid watch user interfaces that symbiotically leverage the joint strengths of electro-mechanical hands and a dynamic watch dial. This approach enables computation and connectivity with existing materials to preserve the inherent physical qualities and abilities of traditional analog watches. We augment the watch's mechanical hands with micro-stepper motors for control, positioning and mechanical expressivity. We extend the traditional watch dial with programmable pigments for non-emissive dynamic patterns. Together, these components enable a unique set of interaction techniques and user interfaces beyond their individual capabilities.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"152 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114048673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
reMi: Translating Ambient Sounds of Moment into Tangible and Shareable Memories through Animated Paper
Kyung Yun Choi, Darle Shinsato, Shane Zhang, Ken Nakagaki, H. Ishii
We present reMi, a tangible memory notebook that records ambient sounds and translates them into a tangible and shareable memory using animated paper. The paper replays the recorded sounds and deforms its shape to generate motions synchronized with the sounds. Computer-mediated communication interfaces have allowed us to share, record, and recall memories easily through visual records. However, the digital visual cues trapped behind a device's 2D screen are not the only means to recall a memory that we experienced with more than the sense of vision. To develop a new way to store, recall, and share a memory, we investigate how the tangible motion of a paper that represents sound can enhance "reminiscence".
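One simple way to synchronize paper motion with recorded sound, sketched below under our own assumptions (frame size, deflection range), is to compute an amplitude envelope and map it to actuator angles; the abstract does not specify the authors' actual mapping.

```python
import math

def amplitude_envelope(samples, frame_size=512):
    """Mean absolute amplitude per frame, normalized to 0..1."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples), frame_size)]
    env = [sum(abs(s) for s in f) / len(f) for f in frames if f]
    peak = max(env) or 1.0
    return [e / peak for e in env]

def envelope_to_angles(envelope, max_deflection_deg=40.0):
    """Map loudness to a paper-bending actuator angle, frame by frame."""
    return [e * max_deflection_deg for e in envelope]

# Example: a 1 kHz test tone with a swelling volume, sampled at 44.1 kHz.
tone = [math.sin(2 * math.pi * 1000 * t / 44100) * (t / 44100)
        for t in range(44100)]
angles = envelope_to_angles(amplitude_envelope(tone))
print(angles[0], angles[-1])  # the deflection grows with the loudness
```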
{"title":"reMi: Translating Ambient Sounds of Moment into Tangible and Shareable Memories through Animated Paper","authors":"Kyung Yun Choi, Darle Shinsato, Shane Zhang, Ken Nakagaki, H. Ishii","doi":"10.1145/3266037.3266109","DOIUrl":"https://doi.org/10.1145/3266037.3266109","url":null,"abstract":"We present a tangible memory notebook--reMi--that records the ambient sounds and translates them into a tangible and shareable memory using animated paper. The paper replays the recorded sounds and deforms its shape to generate synchronized motions with the sounds. Computer-mediated communication interfaces have allowed us to share, record and recall memories easily through visual records. However, those digital visual-cues that are trapped behind the device's 2D screen are not the only means to recall a memory we experienced with more than the sense of vision. To develop a new way to store, recall and share a memory, we investigate how tangible motion of a paper that represents sound can enhance the \"reminiscence\".","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131768946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Post-literate Programming: Linking Discussion and Code in Software Development Teams
Soya Park, Amy X. Zhang, David R Karger
The literate programming paradigm presents a program interleaved with natural language text explaining the code's rationale and logic. While this is great for program readers, the labor of creating literate programs deters most program authors from providing this text at authoring time. Instead, as we determine through interviews, developers provide their design rationales after the fact, in discussions with collaborators. We propose to capture these discussions and incorporate them into the code. We have prototyped a tool to link online discussion of code directly to the code it discusses. Incorporating these discussions incrementally creates post-literate programs that convey information to future developers.
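As an illustration of the linking problem, the sketch below anchors discussion threads to code spans via a hash of the normalized snippet text, so links survive whitespace-only edits. This is our hypothetical scheme for illustration, not the authors' prototype.

```python
import hashlib

def anchor_for(snippet: str) -> str:
    """Stable identifier for a code span: hash of its normalized text."""
    normalized = "\n".join(line.strip() for line in snippet.splitlines())
    return hashlib.sha1(normalized.encode("utf-8")).hexdigest()[:12]

discussions = {}  # anchor -> list of linked discussion threads

def link_discussion(snippet: str, url: str, summary: str):
    """Attach an online discussion to the code span it talks about."""
    discussions.setdefault(anchor_for(snippet), []).append(
        {"url": url, "summary": summary})

def discussions_for(snippet: str):
    """Look up threads for a span, even after indentation-only edits."""
    return discussions.get(anchor_for(snippet), [])

# Hypothetical usage with a placeholder chat URL:
link_discussion("def retry(n):\n    ...", "https://chat.example/t/42",
                "why we retry three times")
print(discussions_for("def retry(n):\n        ..."))  # indentation changed
```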
{"title":"Post-literate Programming: Linking Discussion and Code in Software Development Teams","authors":"Soya Park, Amy X. Zhang, David R Karger","doi":"10.1145/3266037.3266098","DOIUrl":"https://doi.org/10.1145/3266037.3266098","url":null,"abstract":"The literate programming paradigm presents a program interleaved with natural language text explaining the code's rationale and logic. While this is great for program readers, the labor of creating literate programs deters most program authors from providing this text at authoring time. Instead, as we determine through interviews, developers provide their design rationales after the fact, in discussions with collaborators. We propose to capture these discussions and incorporate them into the code. We have prototyped a tool to link online discussion of code directly to the code it discusses. Incorporating these discussions incrementally creates post-literate programs that convey information to future developers.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134261814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
Pop-up Robotics: Facilitating HRI in Public Spaces
Swapna Joshi, S. Šabanović
Human-Robot Interaction (HRI) research in public spaces often encounters delays and restrictions due to several factors, including the need for sophisticated technology, regulatory approvals, and public or community support. To remedy these concerns, we suggest that HRI can apply the core philosophy of Tactical Urbanism, a concept from urban planning, to catalyze HRI in public spaces, gather community feedback and information on the feasibility of future public deployments of robots, and create social impact and forge connections with the community while spreading awareness of robots as a public resource. As a case study, we share the tactics used and strategies followed to conduct a pop-up style study of 'a robotic mailbox to support and raise awareness about homelessness.' We discuss the benefits and challenges of the pop-up approach and recommend using it to enable the social studies of HRI not only to match but to precede the fast-paced technological advancement and deployment of robots.
{"title":"Pop-up Robotics: Facilitating HRI in Public Spaces","authors":"Swapna Joshi, S. Šabanović","doi":"10.1145/3266037.3266125","DOIUrl":"https://doi.org/10.1145/3266037.3266125","url":null,"abstract":"Human-Robot Interaction (HRI) research in public spaces often encounters delays and restrictions due to several factors, including the need for sophisticated technology, regulatory approvals, and public or community support. To remedy these concerns, we suggest HRI can apply the core philosophy of Tactical Urbanism, a concept from urban planning, to catalyze HRI in public spaces, provide community feedback and information on the feasibility of future implementations of robots in the public, and also create social impact and forge connections with the community while spreading awareness about robots as a public resource. As a case study, we share tactics used and strategies followed to conduct a pop-up style study of 'A robotic mailbox to support and raise awareness about homelessness.' We discuss benefits and challenges of the pop-up approach and recommend using it to enable the social studies of HRI not only to match but to precede, the fast-paced technological advancement and deployment of robots.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114293828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Augmented Collaboration in Shared Space Design with Shared Attention and Manipulation
Yoonjeong Cha, Sungu Nam, M. Yi, Jaeseung Jeong, Woontack Woo
Augmented collaboration in a shared house design scenario has been studied widely with various approaches. However, those studies did not consider human perception. Our goal is to lower the user's perceptual load during augmented collaboration in shared space design scenarios. Applying attention theories, we implemented shared head gaze, shared selected object, and collaborative manipulation features in two different versions of our HoloLens-based system. To investigate whether user perceptions of the two versions differ, we ran an experiment with 18 participants (9 pairs) and collected data through a survey and semi-structured interviews. The results did not show significant differences between the two versions, but produced interesting insights. Based on the findings, we provide design guidelines for collaborative AR systems.
{"title":"Augmented Collaboration in Shared Space Design with Shared Attention and Manipulation","authors":"Yoonjeong Cha, Sungu Nam, M. Yi, Jaeseung Jeong, Woontack Woo","doi":"10.1145/3266037.3266086","DOIUrl":"https://doi.org/10.1145/3266037.3266086","url":null,"abstract":"Augmented collaboration in a shared house design scenario has been studied widely with various approaches. However, those studies did not consider human perception. Our goal is to lower the user's perceptual load for augmented collaboration in shared space design scenarios. Applying attention theories, we implemented shared head gaze, shared selected object, and collaborative manipulation features in our system in two different versions with HoloLens. To investigate whether user perceptions of the two different versions differ, we conducted an experiment with 18 participants (9 pairs) and conducted a survey and semi-structured interviews. The results did not show significant differences between the two versions, but produced interesting insights. Based on the findings, we provide design guidelines for collaborative AR systems.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123842323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Engagement Learning: Expanding Visual Knowledge by Engaging Online Participants
Ranjay Krishna, Donsuk Lee, Li Fei-Fei, Michael S. Bernstein
Most artificial intelligence (AI) systems to date have focused entirely on performance, and rarely, if at all, on their social interactions with people or on how to balance the AI's goals against its human collaborators'. Learning quickly from interactions with people poses social challenges and remains technically unresolved. In this paper, we introduce engagement learning: a training approach that learns to trade off what the AI needs (the knowledge value of a label to the AI) against what people are interested to engage with (the engagement value of the label). We realize our goal with ELIA (Engagement Learning Interaction Agent), a conversational AI agent whose goal is to learn new facts about the visual world by asking people engaging questions about the photos they upload to social media. Our current deployment of ELIA on Instagram receives a response rate of 26%.
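The trade-off at the heart of engagement learning can be written down compactly: score each candidate question by a weighted sum of its knowledge value and its engagement value, then ask the top-scoring one. The weighting scheme and the toy value functions below are illustrative assumptions, not the paper's trained estimators.

```python
def select_question(candidates, knowledge_value, engagement_value, alpha=0.5):
    """candidates: question strings; the value functions return floats.
    alpha balances what the AI needs against what people will answer."""
    return max(candidates,
               key=lambda q: alpha * knowledge_value(q)
                             + (1 - alpha) * engagement_value(q))

# Toy example with hand-assigned values for three candidate questions.
kv = {"What breed is this dog?": 0.9, "Is this outdoors?": 0.2,
      "What's the dog's name?": 0.1}
ev = {"What breed is this dog?": 0.5, "Is this outdoors?": 0.3,
      "What's the dog's name?": 0.9}
print(select_question(list(kv), kv.get, ev.get))
# -> "What breed is this dog?" (0.7 vs 0.25 and 0.5)
```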
{"title":"Engagement Learning: Expanding Visual Knowledge by Engaging Online Participants","authors":"Ranjay Krishna, Donsuk Lee, Li Fei-Fei, Michael S. Bernstein","doi":"10.1145/3266037.3266110","DOIUrl":"https://doi.org/10.1145/3266037.3266110","url":null,"abstract":"Most artificial intelligence (AI) systems to date have focused entirely on performance, and rarely if at all on their social interactions with people and how to balance the AIs' goals against their human collaborators'. Learning quickly from interactions with people poses both social challenges and is unresolved technically. In this paper, we introduce engagement learning: a training approach that learns to trade off what the AI needs---the knowledge value of a label to the AI---against what people are interested to engage with---the engagement value of the label. We realize our goal with ELIA (Engagement Learning Interaction Agent), a conversational AI agent who's goal is to learn new facts about the visual world by asking engaging questions of people about the photos they upload to social media. Our current deployment of ELIA on Instagram receives a response rate of 26%.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126447734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2