
Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology: Latest Publications

Tomo: Wearable, Low-Cost Electrical Impedance Tomography for Hand Gesture Recognition
Yang Zhang, Chris Harrison
We present Tomo, a wearable, low-cost system using Electrical Impedance Tomography (EIT) to recover the interior impedance geometry of a user's arm. This is achieved by measuring the cross-sectional impedances between all pairs of eight electrodes resting on a user's skin. Our approach is sufficiently compact and low-powered that we integrated the technology into a prototype wrist- and armband, which can monitor and classify gestures in real-time. We conducted a user study that evaluated two gesture sets, one focused on gross hand gestures and another using thumb-to-finger pinches. Our wrist location achieved 97% and 87% accuracies on these gesture sets respectively, while our arm location achieved 93% and 81%. We ultimately envision this technique being integrated into future smartwatches, allowing hand gestures and direct touch manipulation to work synergistically to support interactive tasks on small screens.
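The all-pairs measurement scheme maps naturally to a fixed-length feature vector. A minimal sketch, assuming a per-pair measurement callback and a nearest-neighbour matcher as illustrative stand-ins (the paper's actual classifier is not specified here):

```python
from itertools import combinations
import math

NUM_ELECTRODES = 8  # as in the Tomo prototype band

def impedance_features(measure_pair):
    """Collect one impedance reading per unordered electrode pair
    (8 electrodes -> 28 pairs -> a 28-dimensional feature vector)."""
    return [measure_pair(a, b)
            for a, b in combinations(range(NUM_ELECTRODES), 2)]

def classify_gesture(features, templates):
    """Nearest-neighbour match against per-gesture template vectors."""
    def dist(u, v):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))
    return min(templates, key=lambda g: dist(features, templates[g]))
```

In use, `measure_pair` would drive the band's electrode multiplexer; here any two-argument function stands in for it.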
DOI: https://doi.org/10.1145/2807442.2807480
Citations: 229
Push-Push: A Drag-like Operation Overlapped with a Page Transition Operation on Touch Interfaces
Jaehyun Han, Geehyuk Lee
Page transition operations on touch interfaces are common and frequent subtasks during drag-like operations such as selecting text or dragging an icon. However, traditional page transition gestures such as scrolling and flicking cannot be performed during a drag-like operation because the two conflict. We propose Push-Push, a new drag-like operation that does not conflict with page transition operations, so page transitions can be performed while a Push-Push operation is in progress. To design Push-Push, we use the hover and pressed states as additional input states of touch interfaces. Results from two experiments show that Push-Push improves performance and users' qualitative ratings while reducing subjective overload.
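The extra hover and pressed input states can be modelled as a small state machine. A hypothetical sketch, where the state and event names are assumptions rather than the paper's terminology:

```python
# Hypothetical touch-input state machine with hover and pressed states;
# in the "pressed" branch a drag no longer conflicts with page scrolling,
# which ordinary "touching" drags do.
TRANSITIONS = {
    ("out_of_range", "approach"): "hover",
    ("hover", "touch_down"): "touching",   # ordinary touch: scroll/flick
    ("touching", "press"): "pressed",      # Push-Push entry
    ("pressed", "move"): "pressed",        # drag-like operation continues
    ("pressed", "release"): "touching",
    ("touching", "lift"): "hover",
    ("hover", "leave"): "out_of_range",
}

def step(state, event):
    """Advance the state machine; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```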
DOI: https://doi.org/10.1145/2807442.2807457
Citations: 10
GazeProjector: Accurate Gaze Estimation and Seamless Gaze Interaction Across Multiple Displays
Christian Lander, Sven Gehring, A. Krüger, Sebastian Boring, A. Bulling
Mobile gaze-based interaction with multiple displays may occur from arbitrary positions and orientations. However, maintaining high gaze estimation accuracy in such situations remains a significant challenge. In this paper, we present GazeProjector, a system that combines (1) natural feature tracking on displays to determine the mobile eye tracker's position relative to a display with (2) accurate point-of-gaze estimation. GazeProjector allows for seamless gaze estimation and interaction on multiple displays of arbitrary sizes independently of the user's position and orientation to the display. In a user study with 12 participants we compare GazeProjector to established methods (here: visual on-screen markers and a state-of-the-art video-based motion capture system). We show that our approach is robust to varying head poses, orientations, and distances to the display, while still providing high gaze estimation accuracy across multiple displays without re-calibration for each variation. Our system represents an important step towards the vision of pervasive gaze-based interfaces.
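Natural-feature tracking of a planar display typically yields a homography between the tracker's scene camera and display coordinates, through which the gaze point can be projected. A minimal sketch, assuming the 3x3 matrix `H` comes from such a feature tracker:

```python
def project_gaze(H, gaze_xy):
    """Apply a 3x3 homography H (row-major nested lists) to a 2D gaze
    point in scene-camera coordinates, returning display coordinates."""
    x, y = gaze_xy
    w = H[2][0] * x + H[2][1] * y + H[2][2]  # homogeneous divisor
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```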
DOI: https://doi.org/10.1145/2807442.2807479
Citations: 34
SHOCam: A 3D Orbiting Algorithm
Michael Ortega-Binderberger, W. Stuerzlinger, Douglas Scheurich
In this paper we describe a new orbiting algorithm, called SHOCam, which enables simple, safe and visually attractive control of a camera moving around 3D objects. Compared with existing methods, SHOCam provides a more consistent mapping between the user's interaction and the path of the camera by substantially reducing variability in both camera motion and look direction. Also, we present a new orbiting method that prevents the camera from penetrating object(s), making the visual feedback -- and with it the user experience -- more pleasing and also less error prone. Finally, we present new solutions for orbiting around multiple objects and multi-scale environments.
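Keeping the camera on a sphere of fixed radius around the target is one way to obtain the consistent motion the paper aims for. A simplified spherical-coordinate sketch, not the actual SHOCam algorithm:

```python
import math

def orbit_position(center, radius, yaw, pitch):
    """Camera position on a sphere around `center` (angles in radians);
    holding the radius constant keeps camera motion consistent as the
    user orbits, and the look direction stays fixed on the center."""
    cx, cy, cz = center
    return (cx + radius * math.cos(pitch) * math.sin(yaw),
            cy + radius * math.sin(pitch),
            cz + radius * math.cos(pitch) * math.cos(yaw))
```

SHOCam additionally clamps this motion so the camera never penetrates geometry; that collision handling is omitted here.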
DOI: https://doi.org/10.1145/2807442.2807496
Citations: 12
Tracko: Ad-hoc Mobile 3D Tracking Using Bluetooth Low Energy and Inaudible Signals for Cross-Device Interaction
Haojian Jin, Christian Holz, K. Hornbæk
While current mobile devices detect the presence of surrounding devices, they lack a truly spatial awareness to bring them into the user's natural 3D space. We present Tracko, a 3D tracking system between two or more commodity devices without added components or device synchronization. Tracko achieves this by fusing three signal types. 1) Tracko infers the presence of and rough distance to other devices from the strength of Bluetooth low energy signals. 2) Tracko exchanges a series of inaudible stereo sounds and derives a set of accurate distances between devices from the difference in their arrival times. A Kalman filter integrates both signal cues to place collocated devices in a shared 3D space, combining the robustness of Bluetooth with the accuracy of audio signals for relative 3D tracking. 3) Tracko incorporates inertial sensors to refine 3D estimates and support quick interactions. Tracko robustly tracks devices in 3D with a mean error of 6.5 cm within 0.5 m and a 15.3 cm error within 1 m, which validates Tracko's suitability for cross-device interactions.
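Acoustic ranging of this kind is commonly formulated as two-way travel time, and the Bluetooth/audio blend as a filtering problem. A minimal sketch under those assumptions; the constants and the fixed fusion gain are illustrative, and the paper's Kalman filter is considerably more involved:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def acoustic_distance(round_trip_s, response_delay_s):
    """Distance from a two-way sound exchange: the signal crosses the
    gap twice, minus the peer's known processing delay."""
    return SPEED_OF_SOUND * (round_trip_s - response_delay_s) / 2.0

def fuse(ble_estimate_m, audio_estimate_m, audio_weight=0.9):
    """Blend the rough Bluetooth distance with the accurate acoustic
    one (a fixed-gain stand-in for the paper's Kalman filter)."""
    return audio_weight * audio_estimate_m + (1 - audio_weight) * ble_estimate_m
```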
DOI: https://doi.org/10.1145/2807442.2807475
Citations: 53
Gunslinger: Subtle Arms-down Mid-air Interaction
Mingyu Liu, Mathieu Nancel, Daniel Vogel
We describe Gunslinger, a mid-air interaction technique using barehand postures and gestures. Unlike past work, we explore a relaxed arms-down position with both hands interacting at the sides of the body. It features "hand-cursor" feedback to communicate recognized hand posture, command mode and tracking quality; and a simple, but flexible hand posture recognizer. Although Gunslinger is suitable for many usage contexts, we focus on integrating mid-air gestures with large display touch input. We show how the Gunslinger form factor enables an interaction language that is equivalent, coherent, and compatible with large display touch input. A four-part study evaluates Midas Touch, posture recognition feedback, pointing and clicking, and general usability.
DOI: https://doi.org/10.1145/2807442.2807489
Citations: 98
Looking through the Eye of the Mouse: A Simple Method for Measuring End-to-end Latency using an Optical Mouse
Géry Casiez, Stéphane Conversy, M. Falce, Stéphane Huot, Nicolas Roussel
We present a simple method for measuring end-to-end latency in graphical user interfaces. The method works with most optical mice and allows accurate, real-time latency measurements up to 5 times per second. In addition, the technique allows easy insertion of probes at different places in the system (e.g., mouse event listeners) to investigate the sources of latency. After presenting the measurement method and our methodology, we detail the measurements we performed on different systems, toolkits and applications. Results show that latency is affected by the operating system and system load. Substantial differences are found between C++/GLUT and C++/Qt or Java/Swing implementations, as well as between web browsers.
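With probes inserted at successive stages, end-to-end latency decomposes into per-stage deltas. A small bookkeeping sketch; the stage names are examples only, not the paper's instrumentation points:

```python
def stage_latencies(timestamps):
    """Given probe timestamps in pipeline order (e.g. mouse event
    listener -> toolkit -> display), return per-stage deltas and the
    end-to-end total, in the timestamps' own units."""
    names = list(timestamps)  # dicts preserve insertion order
    deltas = {f"{a}->{b}": timestamps[b] - timestamps[a]
              for a, b in zip(names, names[1:])}
    return deltas, timestamps[names[-1]] - timestamps[names[0]]
```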
DOI: https://doi.org/10.1145/2807442.2807454
Citations: 29
Procedural Modeling Using Autoencoder Networks
M. E. Yümer, P. Asente, R. Mech, L. Kara
Procedural modeling systems allow users to create high quality content through parametric, conditional or stochastic rule sets. While such approaches create an abstraction layer by freeing the user from direct geometry editing, the nonlinear nature and the high number of parameters associated with such design spaces result in arduous modeling experiences for non-expert users. We propose a method to enable intuitive exploration of such high dimensional procedural modeling spaces within a lower dimensional space learned through autoencoder network training. Our method automatically generates a representative training dataset from the procedural modeling rule set based on shape similarity features. We then leverage the samples in this dataset to train an autoencoder neural network, while also structuring the learned lower dimensional space for continuous exploration with respect to shape features. We demonstrate the efficacy of our method with user studies in which designers create content more than 10 times faster using our system than with the classic procedural modeling interface.
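Once the autoencoder is trained, exploration reduces to decoding points of the low-dimensional latent space back into full procedural parameters. A linear-decoder sketch; the weights, bias, and dimensions are placeholders for what training would produce:

```python
def decode(latent, W, b):
    """Map a low-dimensional latent vector to full procedural-model
    parameters via one affine decoder layer: params = W @ latent + b."""
    return [sum(w_ij * z_j for w_ij, z_j in zip(row, latent)) + b_i
            for row, b_i in zip(W, b)]
```

A user dragging a 2D slider would change `latent` continuously; each decoded vector then re-parameterizes the procedural rule set.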
DOI: https://doi.org/10.1145/2807442.2807448
Citations: 64
Biometric Touch Sensing: Seamlessly Augmenting Each Touch with Continuous Authentication
Christian Holz, Marius Knaust
Current touch devices separate user authentication from regular interaction, for example by displaying modal login screens before device usage or prompting for in-app passwords, which interrupts the interaction flow. We propose biometric touch sensing, a new approach to representing touch events that enables commodity devices to seamlessly integrate authentication into interaction: From each touch, the touchscreen senses the 2D input coordinates and at the same time obtains biometric features that identify the user. Our approach makes authentication during interaction transparent to the user, yet ensures secure interaction at all times. To implement this on today's devices, our watch prototype Bioamp senses the impedance profile of the user's wrist and modulates a signal onto the user's body through skin using a periodic electric signal. This signal affects the capacitive values touchscreens measure upon touch, allowing devices to identify users on each touch. We integrate our approach into Windows 8 and discuss and demonstrate it in the context of various use cases, including access permissions and protecting private screen contents on personal and shared devices.
DOI: https://doi.org/10.1145/2807442.2807458
Citations: 69
DataTone: Managing Ambiguity in Natural Language Interfaces for Data Visualization
Tong Gao, Mira Dontcheva, Eytan Adar, Zhicheng Liu, Karrie Karahalios
Answering questions with data is a difficult and time-consuming process. Visual dashboards and templates make it easy to get started, but asking more sophisticated questions often requires learning a tool designed for expert analysts. Natural language interaction allows users to ask questions directly in complex programs without having to learn how to use an interface. However, natural language is often ambiguous. In this work we propose a mixed-initiative approach to managing ambiguity in natural language interfaces for data visualization. We model ambiguity throughout the process of turning a natural language query into a visualization and use algorithmic disambiguation coupled with interactive ambiguity widgets. These widgets allow the user to resolve ambiguities by surfacing system decisions at the point where the ambiguity matters. Corrections are stored as constraints and influence subsequent queries. We have implemented these ideas in a system, DataTone. In a comparative study, we find that DataTone is easy to learn and lets users ask questions without worrying about syntax and proper question form.
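The ambiguity-widget idea can be sketched as detecting query terms with several plausible data bindings, choosing a default, and surfacing the alternatives for the user to correct. The lexicon and field names below are hypothetical, and DataTone's real parser is far richer than token lookup:

```python
def detect_ambiguities(query, lexicon):
    """For each query token with more than one candidate data binding,
    pick the first as the system default and keep the rest as options
    for an ambiguity widget the user can open to override it."""
    widgets = {}
    for token in query.lower().split():
        candidates = lexicon.get(token, [])
        if len(candidates) > 1:
            widgets[token] = {"default": candidates[0],
                              "alternatives": candidates[1:]}
    return widgets
```

A correction made through a widget would then be stored as a constraint on subsequent queries, as the abstract describes.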
DOI: https://doi.org/10.1145/2807442.2807478
Citations: 204