
Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology: Latest Publications

A Stretch-Flexible Textile Multitouch Sensor for User Input on Inflatable Membrane Structures & Non-Planar Surfaces
Kristian Gohlke, E. Hornecker
We present a textile sensor capable of detecting multi-touch and multi-pressure input on non-planar surfaces, and demonstrate how such sensors can be fabricated and integrated into pressure-stabilized membrane envelopes (i.e., inflatables). Our sensor design is both stretchable and flexible/bendable and can conform to various three-dimensional surface geometries and shape-changing surfaces. We briefly outline an approach for basic signal acquisition from such sensors and show how they can be leveraged to measure the internal air pressure of inflatable objects without specialized air-pressure sensors. We further demonstrate how standard electronic circuits can be integrated with malleable inflatable objects without the need for rigid enclosures for mechanical protection.
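The abstract mentions basic signal acquisition but gives no code; as a rough illustration of how a row/column-scanned textile sensor matrix might be read, here is a minimal Python sketch. The matrix size, threshold, and the `read_adc` interface are all hypothetical, not taken from the paper:

```python
# Hypothetical acquisition sketch for a row/column-scanned textile matrix.
# read_adc stands in for the real microcontroller I/O, which the paper
# does not specify; matrix size and threshold are invented values.

ROWS, COLS = 4, 4
TOUCH_THRESHOLD = 600  # ADC counts, assumed calibration

def scan_matrix(read_adc):
    """Sample every (row, column) cell and report those above threshold."""
    touches = []
    for r in range(ROWS):
        for c in range(COLS):
            value = read_adc(r, c)
            if value > TOUCH_THRESHOLD:
                touches.append((r, c, value))
    return touches

# Simulated front-end with one pressed cell at (1, 2).
def fake_adc(r, c):
    return 800 if (r, c) == (1, 2) else 100

print(scan_matrix(fake_adc))  # -> [(1, 2, 800)]
```

A uniform baseline shift across all cells is one plausible cue for the internal air-pressure estimation the authors mention, since inflation pre-loads the whole membrane; their actual method is not detailed in the abstract.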
Citations: 6
Investigation into Natural Gestures Using EMG for "SuperNatural" Interaction in VR
Chloe Eghtebas, Sandro Weber, G. Klinker
Can natural interaction requirements be fulfilled while still harnessing the "supernatural" fantasy of Virtual Reality (VR)? In this work we used off-the-shelf electromyogram (EMG) sensors as an input device that affords natural gestures for performing the "supernatural" task of growing your arm in VR. We recorded 18 participants performing a simple retrieval task in two phases: an initial phase and a learning phase, where the stretch arm was disabled and enabled respectively. The results show that the gestures used in the initial phase differ from the main gestures used to retrieve an object in our system, and that the time taken to complete the learning phase is highly variable across participants.
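The paper does not publish its gesture pipeline; a common first step with EMG input is to rectify the raw trace, smooth it with a moving-average envelope, and threshold into active regions. A toy sketch, with invented window and threshold values:

```python
# Hedged sketch of a common EMG pre-processing step: rectification plus a
# moving-average envelope, thresholded into active regions. The window and
# threshold values are illustrative, not the paper's.

def emg_envelope(samples, window=4):
    rectified = [abs(s) for s in samples]
    env = []
    for i in range(len(rectified)):
        lo = max(0, i - window + 1)
        env.append(sum(rectified[lo:i + 1]) / (i - lo + 1))
    return env

def active_regions(samples, threshold=0.6, window=4):
    """Indices where the smoothed envelope exceeds the activation threshold."""
    return [i for i, e in enumerate(emg_envelope(samples, window)) if e > threshold]

# Synthetic burst of muscle activity in the middle of the trace.
signal = [0.1, -0.1, 0.9, -0.8, 1.0, -0.9, 0.1, 0.0]
print(active_regions(signal))  # -> [4, 5, 6]
```

A real pipeline would classify gestures from such envelope features across several channels rather than a single threshold.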
Citations: 3
Wearable Haptic Device that Presents the Haptics Sensation Corresponding to Three Fingers on the Forearm
Taha K. Moriyama, Takuto Nakamura, Hiroyuki Kajimoto
In this demonstration, as a new haptic presentation method for objects in virtual reality (VR) environments, we show a device that presents the haptic sensation of the fingertip on the forearm rather than on the fingertip itself. The device adopts a five-bar linkage mechanism and can present both the strength and the direction of force. Compared with fingertip-mounted displays, it addresses the weight and size issues that hinder the free movement of the fingers. We confirmed that the experience in the VR environment is improved compared with the no-haptics condition, even though haptic information is not presented directly to the fingertip.
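As a geometric aside, the position output of a five-bar linkage can be computed by intersecting two circles around the motor-driven elbow joints. The link lengths below are invented for illustration; the device's actual dimensions are not given in the abstract:

```python
import math

# Illustrative forward kinematics for a symmetric five-bar linkage:
# two motors at (0, 0) and (d, 0) drive proximal links of length a;
# distal links of length b meet at the end effector.

def five_bar_endpoint(theta1, theta2, d=4.0, a=3.0, b=3.0):
    # Elbow joints, driven directly by the two motor angles.
    ax, ay = a * math.cos(theta1), a * math.sin(theta1)
    bx, by = d + a * math.cos(theta2), a * math.sin(theta2)
    # End effector = intersection of circles of radius b around each elbow.
    dx, dy = bx - ax, by - ay
    dist = math.hypot(dx, dy)
    h = math.sqrt(b * b - (dist / 2) ** 2)  # assumes a reachable pose
    mx, my = ax + dx / 2, ay + dy / 2
    # Take the solution on the +y side, away from the motor base line.
    return (mx - h * dy / dist, my + h * dx / dist)

x, y = five_bar_endpoint(math.radians(90), math.radians(90))
print(round(x, 3), round(y, 3))  # -> 2.0 5.236
```

Force strength and direction at the end effector would follow from the same geometry via the linkage Jacobian; that step is omitted here.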
Citations: 1
Towards a Symbiotic Human-Machine Depth Sensor: Exploring 3D Gaze for Object Reconstruction
Teresa Hirzle, Jan Gugenheimer, Florian Geiselhart, A. Bulling, E. Rukzio
Eye tracking is expected to become an integral part of future augmented reality (AR) head-mounted displays (HMDs), given that it can easily be integrated into existing hardware and provides a versatile interaction modality. To augment objects in the real world, AR HMDs require a three-dimensional understanding of the scene, which is currently solved using depth cameras. In this work we aim to explore how 3D gaze data can be used to enhance scene understanding for AR HMDs by envisioning a symbiotic human-machine depth camera, fusing depth data with 3D gaze information. We present a first proof of concept, exploring to what extent we are able to recognise what a user is looking at by plotting 3D gaze data. To measure 3D gaze, we implemented a vergence-based algorithm and built an eye tracking setup consisting of a Pupil Labs headset and an OptiTrack motion capture system, allowing us to measure 3D gaze inside a 50x50x50 cm volume. We show first 3D gaze plots of "gazed-at" objects and describe our vision of a symbiotic human-machine depth camera that combines a depth camera and human 3D gaze information.
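A vergence-based 3D gaze estimate of the kind described can be sketched as the midpoint of closest approach between the two eye rays. The eye geometry below is illustrative, and the paper's actual Pupil Labs / OptiTrack calibration pipeline is not reproduced:

```python
import numpy as np

# Sketch of a vergence-based 3D gaze estimate: each eye defines a ray, and
# the gaze point is taken as the midpoint of the rays' closest approach.

def gaze_point(p_left, d_left, p_right, d_right):
    d1 = d_left / np.linalg.norm(d_left)
    d2 = d_right / np.linalg.norm(d_right)
    w0 = p_left - p_right
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b  # assumes the rays are not parallel
    s = (b * e - c * d) / denom  # closest-point parameter on the left ray
    t = (a * e - b * d) / denom  # closest-point parameter on the right ray
    return ((p_left + s * d1) + (p_right + t * d2)) / 2

# Eyes 6 cm apart, both fixating a point 50 cm in front of the head.
left, right = np.array([-3.0, 0.0, 0.0]), np.array([3.0, 0.0, 0.0])
target = np.array([0.0, 0.0, 50.0])
print(gaze_point(left, target - left, right, target - right))
```

For verging eyes the rays intersect, so the midpoint recovers the fixation point, here approximately (0, 0, 50).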
Citations: 5
Hybrid Watch User Interfaces: Collaboration Between Electro-Mechanical Components and Analog Materials
A. Olwal
We introduce programmable material and electro-mechanical control to enable a set of hybrid watch user interfaces that symbiotically leverage the joint strengths of electro-mechanical hands and a dynamic watch dial. This approach enables computation and connectivity with existing materials to preserve the inherent physical qualities and abilities of traditional analog watches. We augment the watch's mechanical hands with micro-stepper motors for control, positioning and mechanical expressivity. We extend the traditional watch dial with programmable pigments for non-emissive dynamic patterns. Together, these components enable a unique set of interaction techniques and user interfaces beyond their individual capabilities.
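Driving a watch hand to a dial position with a stepper motor reduces to choosing a signed step count along the shorter rotation direction. A minimal sketch, assuming a hypothetical 360-steps-per-revolution drive (the actual motor and gearing are not specified in the abstract):

```python
# Illustrative sketch: converting a target dial angle into a signed step
# count along the shorter rotation direction. STEPS_PER_REV is an assumed
# drive resolution, not a figure from the paper.

STEPS_PER_REV = 360

def steps_to(target_deg, current_step):
    """Signed number of steps from the current position to the target angle."""
    target_step = round(target_deg / 360 * STEPS_PER_REV) % STEPS_PER_REV
    delta = (target_step - current_step) % STEPS_PER_REV
    if delta > STEPS_PER_REV / 2:
        delta -= STEPS_PER_REV  # going backwards is shorter
    return delta

print(steps_to(350, 0))  # -> -10 (10 steps backwards beats 350 forwards)
print(steps_to(90, 45))  # -> 45
```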
Citations: 1
reMi: Translating Ambient Sounds of Moment into Tangible and Shareable Memories through Animated Paper
Kyung Yun Choi, Darle Shinsato, Shane Zhang, Ken Nakagaki, H. Ishii
We present a tangible memory notebook, reMi, that records ambient sounds and translates them into a tangible and shareable memory using animated paper. The paper replays the recorded sounds and deforms its shape to generate motions synchronized with the sounds. Computer-mediated communication interfaces have allowed us to share, record and recall memories easily through visual records. However, those digital visual cues trapped behind a device's 2D screen are not the only means to recall a memory we experienced with more than the sense of vision. To develop a new way to store, recall and share a memory, we investigate how the tangible motion of a paper that represents sound can enhance "reminiscence".
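One simple way the recorded sound could drive synchronized paper motion is to map the audio amplitude envelope, frame by frame, to actuator bend angles. The frame size and maximum bend angle below are invented; the paper does not describe its actual sound-to-motion mapping:

```python
# Sketch of a sound-to-motion mapping: each audio frame's peak amplitude is
# scaled to an actuator bend angle, producing motion that follows loudness.

MAX_BEND_DEG = 45.0

def envelope_to_angles(samples, frame=4):
    frames = [samples[i:i + frame] for i in range(0, len(samples), frame)]
    peaks = [max(abs(s) for s in f) for f in frames]
    loudest = max(peaks) or 1.0  # avoid dividing by zero on silence
    return [round(p / loudest * MAX_BEND_DEG, 1) for p in peaks]

# Short synthetic clip: quiet, loud burst, quiet again.
audio = [0.0, 0.2, -0.1, 0.0, 0.5, -1.0, 0.8, 0.1, 0.0, 0.1, -0.2, 0.0]
print(envelope_to_angles(audio))  # -> [9.0, 45.0, 9.0]
```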
Citations: 3
Post-literate Programming: Linking Discussion and Code in Software Development Teams
Soya Park, Amy X. Zhang, David R Karger
The literate programming paradigm presents a program interleaved with natural language text explaining the code's rationale and logic. While this is great for program readers, the labor of creating literate programs deters most program authors from providing this text at authoring time. Instead, as we determine through interviews, developers provide their design rationales after the fact, in discussions with collaborators. We propose to capture these discussions and incorporate them into the code. We have prototyped a tool to link online discussion of code directly to the code it discusses. Incorporating these discussions incrementally creates post-literate programs that convey information to future developers.
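The abstract does not specify how discussion threads stay attached to code as it evolves; one minimal anchoring scheme is to store the discussed snippet with the thread and re-locate it by search. A hypothetical sketch (the tool's real mechanism may differ entirely):

```python
# Hedged sketch of one way a tool could link a discussion thread to code:
# store the discussed snippet with the thread, then re-locate it by scanning
# the current file, so the link survives edits elsewhere in the file.

def resolve(snippet, source_lines):
    """Return the 1-based line where the discussed snippet now lives,
    or None if it was deleted from the file."""
    for i, line in enumerate(source_lines, start=1):
        if snippet in line:
            return i
    return None

# A thread anchored to a line of code survives an insertion above it.
thread = {"comment": "why not math.fsum here?", "snippet": "return sum(xs)"}
code_v2 = ["# new header comment", "def total(xs):", "    return sum(xs)"]
print(resolve(thread["snippet"], code_v2))  # -> 3
```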
Citations: 12
Sense.Seat: Inducing Improved Mood and Cognition through Multisensorial Priming
Pedro F. Campos, Diogo Cabral, Frederica Gonçalves
User interface software and technologies have been evolving significantly and rapidly. This poster presents a breakthrough user experience that leverages multisensorial priming and embedded interaction, introducing an interactive piece of furniture called Sense.Seat. Sensory stimuli such as calm colors, lavender and other scents, as well as ambient soundscapes, have traditionally been used to spark creativity and promote well-being. Sense.Seat is the first computational multisensorial seat that can be digitally controlled to vary the frequency and intensity of visual, auditory and olfactory stimuli. It is a new user interface shaped as a seat or pod that primes the user for improved mood and cognition, thereby improving the work environment.
Citations: 1
Interactive Tangrami: Rapid Prototyping with Modular Paper-folded Electronics
Michael Wessely, Nadiya Morenko, Jürgen Steimle, M. Schmitz
Prototyping interactive objects with personal fabrication tools like 3D printers requires the maker to create each subsequent design artifact from scratch, which produces unnecessary waste and does not allow functional components to be reused. We present Interactive Tangrami: paper-folded, reusable building blocks (Tangramis) that can contain various sensor input and visual output capabilities. We propose a digital design toolkit that lets the user plan the shape and functionality of a design piece. The software manages communication with the physical artifact and streams the interaction data via the Open Sound Control (OSC) protocol to an application prototyping system (e.g. MaxMSP). The building blocks are fabricated digitally with a rapid and inexpensive ink-jet printing method. Our system allows physical user interfaces to be prototyped within minutes and without knowledge of the underlying technologies. We demonstrate its usefulness with two application examples.
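The toolkit streams interaction data over OSC; for readers unfamiliar with the wire format, here is a hand-rolled sketch of encoding a basic OSC message (address, type-tag string, big-endian arguments, each string null-padded to a 4-byte boundary). The address and value are made up, and real projects would typically use an OSC library instead:

```python
import struct

# Minimal sketch of OSC message encoding per the OSC 1.0 framing rules.

def osc_pad(b):
    return b + b"\x00" * (4 - len(b) % 4)  # null-terminate and pad to 4 bytes

def osc_message(address, *args):
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)
        else:
            raise TypeError("only int/float sketched here")
    return osc_pad(address.encode()) + osc_pad(tags.encode()) + payload

msg = osc_message("/tangrami/slider", 0.5)
print(len(msg))  # -> 28 bytes: padded address + padded ",f" + one float32
```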
Citations: 4
Augmented Collaboration in Shared Space Design with Shared Attention and Manipulation
Yoonjeong Cha, Sungu Nam, M. Yi, Jaeseung Jeong, Woontack Woo
Augmented collaboration in a shared house design scenario has been studied widely with various approaches. However, those studies did not consider human perception. Our goal is to lower the user's perceptual load for augmented collaboration in shared space design scenarios. Applying attention theories, we implemented shared head gaze, shared selected object, and collaborative manipulation features in our system in two different versions with HoloLens. To investigate whether user perceptions of the two different versions differ, we conducted an experiment with 18 participants (9 pairs) and conducted a survey and semi-structured interviews. The results did not show significant differences between the two versions, but produced interesting insights. Based on the findings, we provide design guidelines for collaborative AR systems.
Citations: 7