
Latest publications in Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology

Orbits: Gaze Interaction for Smart Watches using Smooth Pursuit Eye Movements
Augusto Esteves, Eduardo Velloso, A. Bulling, Hans-Werner Gellersen
We introduce Orbits, a novel gaze interaction technique that enables hands-free input on smart watches. The technique relies on moving controls to leverage the smooth pursuit movements of the eyes and to detect whether, and at which control, the user is looking. In Orbits, controls include targets that move in a circular trajectory on the watch face, and can be selected by following the desired one with the eyes for a short period of time. We conducted two user studies to assess the technique's recognition and robustness, which demonstrated that Orbits is robust against false positives triggered by natural eye movements and that it presents a hands-free, high-accuracy way of interacting with smart watches using off-the-shelf devices. Finally, we developed three example interfaces built with Orbits: a music player, a notifications face plate and a missed call menu. Despite relying on moving controls, which are very unusual in current HCI interfaces, these were generally well received by participants in a third and final study.
DOI: 10.1145/2807442.2807499
Citations: 208
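The Orbits abstract above does not spell out the recognition algorithm, but smooth-pursuit interfaces are commonly implemented by correlating the gaze trajectory with each moving target's trajectory over a short window. A minimal sketch under that assumption (Pearson correlation on synthetic gaze data; the threshold and all names are illustrative, not the paper's implementation):

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def match_orbit(gaze_x, gaze_y, targets, threshold=0.8):
    """Return the index of the orbiting target whose trajectory best
    correlates with the gaze samples, or None if all fall below threshold."""
    best, best_score = None, threshold
    for i, (tx, ty) in enumerate(targets):
        score = min(pearson(gaze_x, tx), pearson(gaze_y, ty))
        if score > best_score:
            best, best_score = i, score
    return best

# Two targets orbiting in opposite directions; gaze follows target 0
# with a constant offset (which does not affect correlation).
ts = [k / 30 for k in range(30)]
t0 = ([math.cos(2 * math.pi * t) for t in ts], [math.sin(2 * math.pi * t) for t in ts])
t1 = ([math.cos(-2 * math.pi * t) for t in ts], [math.sin(-2 * math.pi * t) for t in ts])
gaze = ([x + 0.05 for x in t0[0]], [y - 0.03 for y in t0[1]])
print(match_orbit(gaze[0], gaze[1], [t0, t1]))  # 0
```

Taking the minimum of the x and y correlations rejects the counter-rotating target, whose x trajectory alone would correlate perfectly.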
Leveraging Dual-Observable Input for Fine-Grained Thumb Interaction Using Forearm EMG
D. Huang, Xiaoyi Zhang, T. S. Saponas, J. Fogarty, Shyamnath Gollakota
We introduce the first forearm-based EMG input system that can recognize fine-grained thumb gestures, including left swipes, right swipes, taps, long presses, and more complex thumb motions. EMG signals for thumb motions sensed from the forearm are quite weak and require significant training data to classify. We therefore also introduce a novel approach for minimally-intrusive collection of labeled training data for always-available input devices. Our dual-observable input approach is based on the insight that interaction observed by multiple devices allows recognition by a primary device (e.g., phone recognition of a left swipe gesture) to create labeled training examples for another (e.g., forearm-based EMG data labeled as a left swipe). We implement a wearable prototype with dry EMG electrodes, train with labeled demonstrations from participants using their own phones, and show that our prototype can recognize common fine-grained thumb gestures and user-defined complex gestures.
DOI: 10.1145/2807442.2807506
Citations: 32
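As a rough illustration of the dual-observable idea described above, the sketch below uses phone-recognized gesture labels to tag synthetic "EMG" feature vectors, then trains a simple nearest-centroid classifier. The features, classifier, and all names are illustrative stand-ins, not the paper's actual pipeline:

```python
def train_centroids(examples):
    """examples: list of (label, feature_vector) pairs -> {label: centroid}."""
    sums, counts = {}, {}
    for label, vec in examples:
        s = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {l: [v / counts[l] for v in s] for l, s in sums.items()}

def classify(centroids, vec):
    """Assign vec to the label with the nearest centroid (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda l: dist(centroids[l], vec))

# The phone observes the gesture (the label); the forearm sensor
# simultaneously supplies the feature vector (synthetic here).
labeled = [("left_swipe", [0.9, 0.1]), ("left_swipe", [0.8, 0.2]),
           ("tap", [0.1, 0.9]), ("tap", [0.2, 0.8])]
model = train_centroids(labeled)
print(classify(model, [0.85, 0.15]))  # left_swipe
```

The point of the pairing is that no explicit labeling session is needed: ordinary phone use generates the ground truth.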
RevoMaker: Enabling Multi-directional and Functionally-embedded 3D printing using a Rotational Cuboidal Platform
Wei Gao, Yunbo Zhang, Diogo C. Nazzetta, K. Ramani, R. Cipra
In recent years, 3D printing has gained significant attention from the maker community, academia, and industry to support low-cost and iterative prototyping of designs. Current unidirectional extrusion systems require printing sacrificial material to support printed features such as overhangs. Furthermore, integrating functions such as sensing and actuation into these parts requires additional steps and processes to create "functional enclosures", since design functionality cannot be easily embedded into prototype printing. All of these factors result in relatively high design iteration times. We present "RevoMaker", a self-contained 3D printer that creates direct out-of-the-printer functional prototypes, using less build material and with substantially less reliance on support structures. By modifying a standard low-cost FDM printer with a revolving cuboidal platform and printing partitioned geometries around cuboidal facets, we achieve a multidirectional additive prototyping process to reduce the print and support material use. Our optimization framework considers various orientations and sizes for the cuboidal base. The mechanical, electronic, and sensory components are preassembled on the flattened laser-cut facets and enclosed inside the cuboid when closed. We demonstrate RevoMaker directly printing a variety of customized and fully-functional product prototypes, such as computer mice and toys, thus illustrating the new affordances of 3D printing for functional product design.
DOI: 10.1145/2807442.2807476
Citations: 97
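The abstract mentions an optimization framework that considers orientations for the cuboidal base. A toy sketch of one ingredient such a framework might use: among axis-aligned orientations, pick the one that leaves the fewest overhanging faces. The 45-degree threshold, the face-normal test, and the orientation set are illustrative assumptions, not the paper's formulation:

```python
import math

def needs_support(normal, threshold_deg=45.0):
    """A face overhangs when its normal points downward more steeply
    than the printable overhang angle."""
    nx, ny, nz = normal
    mag = math.sqrt(nx * nx + ny * ny + nz * nz)
    cos_down = -nz / mag  # cosine of angle to the downward direction (0, 0, -1)
    return cos_down > math.cos(math.radians(threshold_deg))

ROTATIONS = {  # six axis-aligned orientations of the cuboidal base
    "+z up": lambda n: n,
    "-z up": lambda n: (n[0], -n[1], -n[2]),
    "+x up": lambda n: (-n[2], n[1], n[0]),
    "-x up": lambda n: (n[2], n[1], -n[0]),
    "+y up": lambda n: (n[0], -n[2], n[1]),
    "-y up": lambda n: (n[0], n[2], -n[1]),
}

def best_orientation(face_normals):
    """Pick the orientation minimizing the number of overhanging faces."""
    def cost(rot):
        return sum(needs_support(rot(n)) for n in face_normals)
    return min(ROTATIONS, key=lambda name: cost(ROTATIONS[name]))

# A part whose faces mostly point along -z overhangs least when flipped.
normals = [(0, 0, -1), (0, 0, -1), (0, 0, -1), (1, 0, 0)]
print(best_orientation(normals))  # -z up
```

A real framework would weigh overhang area and print quality, not just face counts, and would consider the partitioning across facets.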
Tiltcasting: 3D Interaction on Large Displays using a Mobile Device
Krzysztof Pietroszek, James R. Wallace, E. Lank
We develop and formally evaluate a metaphor for smartphone interaction with 3D environments: Tiltcasting. Under the Tiltcasting metaphor, users interact within a rotatable 2D plane that is "cast" from their phone's interactive display into 3D space. Through an empirical validation, we show that Tiltcasting supports efficient pointing, interaction with occluded objects, disambiguation between nearby objects, and object selection and manipulation in fully addressable 3D space. Our technique outperforms existing target-agnostic pointing implementations and approaches the performance of physical pointing with an off-the-shelf smartphone.
DOI: 10.1145/2807442.2807471
Citations: 33
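A minimal sketch of the "cast plane" geometry described above: a 2D point on the phone's display is placed on a plane at some depth in the scene, and the plane is rotated by the phone's tilt. The single-axis tilt and all parameters are illustrative simplifications, not the paper's full technique:

```python
import math

def rotate_x(p, deg):
    """Rotate a 3D point about the x axis by deg degrees."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    x, y, z = p
    return (x, y * c - z * s, y * s + z * c)

def cast_point(u, v, tilt_deg, depth):
    """Map a 2D touch point (u, v) on the phone screen onto the
    interaction plane 'cast' into the 3D scene: the plane starts
    upright at the given depth and is rotated by the phone's tilt."""
    return rotate_x((u, v, depth), tilt_deg)

p = cast_point(0.5, 0.2, 0.0, 2.0)
print(p)  # (0.5, 0.2, 2.0): with no tilt the plane stays upright
```

Because the whole plane is addressable, occluded or distant objects can be reached by tilting the plane rather than by ray-casting past foreground geometry.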
Projectibles: Optimizing Surface Color For Projection
Brett R. Jones, Rajinder Sodhi, Pulkit Budhiraja, Kevin Karsch, B. Bailey, D. Forsyth
Typically, video projectors display images onto white screens, which can result in a washed-out image. Projectibles algorithmically control the display surface color to increase contrast and resolution. By combining a printed image with projected light, we can create animated, high-resolution, high-dynamic-range visual experiences for video sequences. We present two algorithms for separating an input video sequence into a printed component and a projected component, maximizing the combined contrast and resolution while minimizing any visual artifacts introduced by the decomposition. We present empirical measurements of real-world results for six example video sequences and subjective viewer feedback ratings, and we discuss the benefits and limitations of Projectibles. This is the first approach to combine a static display with a dynamic display for the presentation of video, and the first to optimize surface color for video projection.
DOI: 10.1145/2807442.2807486
Citations: 4
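The print/projection split described above can be illustrated with a toy per-pixel multiplicative model: what the viewer sees is printed reflectance times projected light. Choosing the print as the per-pixel maximum over all frames keeps every required projector value within its normalized [0, 1] range. This is only a sketch under that assumed model; the paper's two algorithms optimize contrast and resolution jointly:

```python
def decompose(frames):
    """frames: list of equal-length pixel rows with values in [0, 1].
    Returns (print_layer, projected_frames) so that, per pixel,
    print * projected ~= frame, with projected values in [0, 1]."""
    n = len(frames[0])
    print_layer = [max(f[i] for f in frames) for i in range(n)]
    projected = [[(f[i] / print_layer[i]) if print_layer[i] else 0.0
                  for i in range(n)]
                 for f in frames]
    return print_layer, projected

# Two 3-pixel frames of a tiny "video".
frames = [[0.8, 0.2, 0.0], [0.4, 0.6, 0.0]]
ink, light = decompose(frames)
print(ink)  # [0.8, 0.6, 0.0]
```

The printed layer carries the static detail (and can exceed the projector's native resolution), while the projector only modulates brightness over time.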
ReForm: Integrating Physical and Digital Design through Bidirectional Fabrication
Christian Weichel, John Hardy, Jason Alexander, Hans-Werner Gellersen
Digital fabrication machines such as 3D printers and laser-cutters allow users to produce physical objects based on virtual models. The creation process is currently unidirectional: once an object is fabricated it is separated from its originating virtual model. Consequently, users are tied into digital modeling tools, the virtual design must be completed before fabrication, and once fabricated, re-shaping the physical object no longer influences the digital model. To provide a more flexible design process that allows objects to iteratively evolve through both digital and physical input, we introduce bidirectional fabrication. To demonstrate the concept, we built ReForm, a system that integrates digital modeling with shape input, shape output, annotation for machine commands, and visual output. By continually synchronizing the physical object and digital model it supports object versioning to allow physical changes to be undone. Through application examples, we demonstrate the benefits of ReForm to the digital fabrication process.
DOI: 10.1145/2807442.2807451
Citations: 61
BackHand: Sensing Hand Gestures via Back of the Hand
Jhe-Wei Lin, Chiuan Wang, Yi Yao Huang, Kuan-Ting Chou, Hsuan-Yu Chen, Wei-Luan Tseng, Mike Y. Chen
In this paper, we explore using the back of hands for sensing hand gestures, which interferes less than glove-based approaches and provides better recognition than sensing at wrists and forearms. Our prototype, BackHand, uses an array of strain gauge sensors affixed to the back of hands, and applies machine learning techniques to recognize a variety of hand gestures. We conducted a user study with 10 participants to better understand gesture recognition accuracy and the effects of sensing locations. Results showed that sensor reading patterns differ significantly across users, but are consistent for the same user. The leave-one-user-out accuracy is low at an average of 27.4%, but reaches 95.8% average accuracy for 16 popular hand gestures when personalized for each participant. The most promising location spans the 1/8~1/4 area between the metacarpophalangeal joints (MCP, the knuckles between the hand and fingers) and the head of ulna (tip of the wrist).
DOI: 10.1145/2807442.2807462
Citations: 59
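The leave-one-user-out figure quoted above comes from a cross-user evaluation protocol: train on every user but one, test on the held-out user, and average. A self-contained sketch of that protocol on synthetic 1-D "sensor" data (the nearest-centroid classifier and all values are illustrative) shows how a shift in one user's readings drags the average down, mirroring the abstract's observation that patterns differ across users:

```python
def centroid_model(samples):
    """samples: list of (gesture, value) -> {gesture: mean value}."""
    acc = {}
    for g, v in samples:
        acc.setdefault(g, []).append(v)
    return {g: sum(vs) / len(vs) for g, vs in acc.items()}

def predict(model, value):
    return min(model, key=lambda g: abs(model[g] - value))

def leave_one_user_out(per_user):
    """per_user: {user: [(gesture, value), ...]} -> mean held-out accuracy."""
    accs = []
    for held_out in per_user:
        train = [s for u, ss in per_user.items() if u != held_out for s in ss]
        model = centroid_model(train)
        test = per_user[held_out]
        correct = sum(predict(model, v) == g for g, v in test)
        accs.append(correct / len(test))
    return sum(accs) / len(accs)

# User C's readings are shifted, illustrating cross-user variation.
data = {"A": [("tap", 1.0), ("swipe", 5.0)],
        "B": [("tap", 1.2), ("swipe", 5.2)],
        "C": [("tap", 4.0), ("swipe", 8.0)]}
print(leave_one_user_out(data))
```

Per-user personalization sidesteps the shift entirely, which is why the per-participant accuracy in the abstract (95.8%) is so much higher than the leave-one-user-out accuracy (27.4%).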
Patching Physical Objects
Alexander Teibrich, Stefanie Müller, François Guimbretière, Róbert Kovács, Stefan Neubert, Patrick Baudisch
Personal fabrication is currently a one-way process: Once an object has been fabricated with a 3D printer, it cannot be changed anymore; any change requires printing a new version from scratch. The problem is that this approach ignores the nature of design iteration, i.e. that in subsequent iterations large parts of an object stay the same and only small parts change. This makes fabricating from scratch feel unnecessary and wasteful. In this paper, we propose a different approach: instead of re-printing the entire object from scratch, we suggest patching the existing object to reflect the next design iteration. We built a system on top of a 3D printer that accomplishes this: Users mount the existing object into the 3D printer, then load both the original and the modified 3D model into our software, which in turn calculates how to patch the object. After identifying which parts to remove and what to add, our system locates the existing object in the printer using the system's built-in 3D scanner. After calibrating the orientation, a mill first removes the outdated geometry, then a print head prints the new geometry in place. Since only a fraction of the entire object is refabricated, our approach reduces material consumption and plastic waste (for our example objects by 82% and 93% respectively).
DOI: 10.1145/2807442.2807467
Citations: 80
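Conceptually, the patch described above is a difference between design iterations. A toy sketch, assuming both versions are voxelized (an assumption for illustration; the system itself works from 3D models with a mill and print head): voxels only in the old model must be milled away, voxels only in the new model must be printed in place:

```python
def diff_voxels(old, new):
    """old, new: sets of (x, y, z) voxel coordinates.
    Returns (to_remove, to_add) as set differences."""
    return old - new, new - old

# Old iteration: a 4x2 slab. New iteration: a 3x2 slab plus one voxel on top.
old = {(x, y, 0) for x in range(4) for y in range(2)}
new = {(x, y, 0) for x in range(3) for y in range(2)} | {(0, 0, 1)}
to_remove, to_add = diff_voxels(old, new)
print(sorted(to_remove))  # [(3, 0, 0), (3, 1, 0)]
print(sorted(to_add))     # [(0, 0, 1)]

# Material saving vs. printing from scratch: only the added voxels are built.
saving = 1 - len(to_add) / len(new)
```

Because iterations typically change only small regions, the added set is far smaller than the full model, which is the source of the material savings the abstract reports.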
Codeopticon: Real-Time, One-To-Many Human Tutoring for Computer Programming
Philip J. Guo
One-on-one tutoring from a human expert is an effective way for novices to overcome learning barriers in complex domains such as computer programming. But there are usually far fewer experts than learners. To enable a single expert to help more learners at once, we built Codeopticon, an interface that enables a programming tutor to monitor and chat with dozens of learners in real time. Each learner codes in a workspace that consists of an editor, compiler, and visual debugger. The tutor sees a real-time view of each learner's actions on a dashboard, with each learner's workspace summarized in a tile. At a glance, the tutor can see how learners are editing and debugging their code, and what errors they are encountering. The dashboard automatically reshuffles tiles so that the most active learners are always in the tutor's main field of view. When the tutor sees that a particular learner needs help, they can open an embedded chat window to start a one-on-one conversation. A user study showed that 8 first-time Codeopticon users successfully tutored anonymous learners from 54 countries in a naturalistic online setting. On average, in a 30-minute session, each tutor monitored 226 learners, started 12 conversations, exchanged 47 chats, and helped 2.4 learners.
DOI: 10.1145/2807442.2807469
Citations: 78
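The Codeopticon abstract describes a dashboard that "automatically reshuffles tiles so that the most active learners are always in the tutor's main field of view." A minimal sketch of that reshuffling policy, assuming a simple recency-window activity score (the `Learner` class, `activity_score`, and the event data below are illustrative, not from the paper):

```python
# Hypothetical sketch of Codeopticon-style tile reshuffling: learners with the
# most edit/compile events in a recent time window are surfaced first on the
# tutor's dashboard. Names and parameters here are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class Learner:
    name: str
    event_times: list = field(default_factory=list)  # timestamps of edits/errors

    def activity_score(self, now: float, window: float = 60.0) -> int:
        # Count events within the last `window` seconds.
        return sum(1 for t in self.event_times if now - t <= window)

def reshuffle(learners, now):
    # Most active learners first; ties broken by name so the layout is stable.
    return sorted(learners, key=lambda l: (-l.activity_score(now), l.name))

now = 1000.0
a = Learner("ana", [995.0, 998.0, 999.5])  # 3 recent events
b = Learner("bo", [900.0])                 # event outside the 60 s window
c = Learner("cy", [999.0])                 # 1 recent event
order = reshuffle([a, b, c], now)
print([l.name for l in order])  # ['ana', 'cy', 'bo']
```

A real implementation would rerun this sort whenever a new learner event arrives, animating tiles into their new positions.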
LineFORM: Actuated Curve Interfaces for Display, Interaction, and Constraint
Ken Nakagaki, Sean Follmer, H. Ishii
In this paper we explore the design space of actuated curve interfaces, a novel class of shape changing-interfaces. Physical curves have several interesting characteristics from the perspective of interaction design: they have a variety of inherent affordances; they can easily represent abstract data; and they can act as constraints, boundaries, or borderlines. By utilizing such aspects of lines and curves, together with the added capability of shape-change, new possibilities for display, interaction and body constraint are possible. In order to investigate these possibilities we have implemented two actuated curve interfaces at different scales. LineFORM, our implementation, inspired by serpentine robotics, is comprised of a series chain of 1DOF servo motors with integrated sensors for direct manipulation. To motivate this work we present various applications such as shape changing cords, mobiles, body constraints, and data manipulation tools.
{"title":"LineFORM: Actuated Curve Interfaces for Display, Interaction, and Constraint","authors":"Ken Nakagaki, Sean Follmer, H. Ishii","doi":"10.1145/2807442.2807452","DOIUrl":"https://doi.org/10.1145/2807442.2807452","url":null,"abstract":"In this paper we explore the design space of actuated curve interfaces, a novel class of shape changing-interfaces. Physical curves have several interesting characteristics from the perspective of interaction design: they have a variety of inherent affordances; they can easily represent abstract data; and they can act as constraints, boundaries, or borderlines. By utilizing such aspects of lines and curves, together with the added capability of shape-change, new possibilities for display, interaction and body constraint are possible. In order to investigate these possibilities we have implemented two actuated curve interfaces at different scales. LineFORM, our implementation, inspired by serpentine robotics, is comprised of a series chain of 1DOF servo motors with integrated sensors for direct manipulation. To motivate this work we present various applications such as shape changing cords, mobiles, body constraints, and data manipulation tools.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125401893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 90
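LineFORM is described as "a series chain of 1DOF servo motors," which means displaying a target curve reduces to converting sampled curve points into one turn angle per joint. A minimal sketch of that conversion, assuming equal-length links and 2D curves (the function name and the circle example are illustrative, not from the paper):

```python
# Hypothetical sketch: approximating a target 2D curve with a serial chain of
# 1-DOF joints, as in LineFORM's hardware. Each servo's angle is the turn
# between successive chord segments sampled along the curve.
import math

def chain_angles(points):
    """Relative joint angles (radians) for a chain that follows `points`."""
    # Absolute heading of each chord segment between consecutive samples.
    headings = [math.atan2(y1 - y0, x1 - x0)
                for (x0, y0), (x1, y1) in zip(points, points[1:])]
    # First joint takes the absolute heading; each later joint takes the
    # turn relative to the previous segment, wrapped into (-pi, pi].
    angles = [headings[0]]
    for h0, h1 in zip(headings, headings[1:]):
        angles.append(math.atan2(math.sin(h1 - h0), math.cos(h1 - h0)))
    return angles

# Sample an arc of the unit circle: every joint should turn by the same amount.
n = 8
pts = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
       for i in range(n + 1)]
angles = chain_angles(pts)
print([round(a, 3) for a in angles[1:]])  # uniform turns of 2*pi/8 ≈ 0.785 rad
```

For the circle the relative turns come out uniform, which matches the intuition that a constant-curvature curve needs the same angle at every servo.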