We present Tomo, a wearable, low-cost system using Electrical Impedance Tomography (EIT) to recover the interior impedance geometry of a user's arm. This is achieved by measuring the cross-sectional impedances between all pairs of eight electrodes resting on a user's skin. Our approach is sufficiently compact and low-powered that we integrated the technology into a prototype wrist- and armband, which can monitor and classify gestures in real time. We conducted a user study that evaluated two gesture sets, one focused on gross hand gestures and another using thumb-to-finger pinches. Our wrist location achieved 97% and 87% accuracies on these gesture sets, respectively, while our arm location achieved 93% and 81%. We ultimately envision this technique being integrated into future smartwatches, allowing hand gestures and direct touch manipulation to work synergistically to support interactive tasks on small screens.
{"title":"Tomo: Wearable, Low-Cost Electrical Impedance Tomography for Hand Gesture Recognition","authors":"Yang Zhang, Chris Harrison","doi":"10.1145/2807442.2807480","DOIUrl":"https://doi.org/10.1145/2807442.2807480","url":null,"abstract":"We present Tomo, a wearable, low-cost system using Electrical Impedance Tomography (EIT) to recover the interior impedance geometry of a user's arm. This is achieved by measuring the cross-sectional impedances between all pairs of eight electrodes resting on a user's skin. Our approach is sufficiently compact and low-powered that we integrated the technology into a prototype wrist- and armband, which can monitor and classify gestures in real-time. We conducted a user study that evaluated two gesture sets, one focused on gross hand gestures and another using thumb-to-finger pinches. Our wrist location achieved 97% and 87% accuracies on these gesture sets respectively, while our arm location achieved 93% and 81%. We ultimately envision this technique being integrated into future smartwatches, allowing hand gestures and direct touch manipulation to work synergistically to support interactive tasks on small screens.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"5 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120921006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A page transition operation on touch interfaces is a common and frequent subtask when one conducts a drag-like operation such as selecting text or dragging an icon. Traditional page transition gestures such as scrolling and flicking, however, cannot be performed during a drag-like operation because the two conflict. We propose Push-Push, a new drag-like operation that does not conflict with page transition operations, so page transitions can be performed while Push-Push is in progress. To design Push-Push, we utilized the hover and pressed states as additional input states of touch interfaces. The results from two experiments showed that Push-Push improves performance and users' qualitative ratings while reducing subjective workload.
{"title":"Push-Push: A Drag-like Operation Overlapped with a Page Transition Operation on Touch Interfaces","authors":"Jaehyun Han, Geehyuk Lee","doi":"10.1145/2807442.2807457","DOIUrl":"https://doi.org/10.1145/2807442.2807457","url":null,"abstract":"A page transition operation on touch interfaces is a common and frequent subtask when one conducts a drag-like operation such as selecting text and dragging an icon. Traditional page transition gestures such as scrolling and flicking gestures, however, cannot be conducted while conducting the drag-like operation since they have a confliction. We proposed Push-Push that is a new drag-like operation not in conflict with page transition operations. Thus, page transition operations could be conducted while performing Push-Push. To design Push-Push, we utilized the hover and pressed states as additional input states of touch interfaces. The results from two experiments showed that Push-Push has an advantage on increasing performance and qualitative opinions of users while reducing the subjective overload.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114333531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Christian Lander, Sven Gehring, A. Krüger, Sebastian Boring, A. Bulling
Mobile gaze-based interaction with multiple displays may occur from arbitrary positions and orientations. However, maintaining high gaze estimation accuracy in such situations remains a significant challenge. In this paper, we present GazeProjector, a system that combines (1) natural feature tracking on displays to determine the mobile eye tracker's position relative to a display with (2) accurate point-of-gaze estimation. GazeProjector allows for seamless gaze estimation and interaction on multiple displays of arbitrary sizes independently of the user's position and orientation to the display. In a user study with 12 participants we compare GazeProjector to established methods (here: visual on-screen markers and a state-of-the-art video-based motion capture system). We show that our approach is robust to varying head poses, orientations, and distances to the display, while still providing high gaze estimation accuracy across multiple displays without re-calibration for each variation. Our system represents an important step towards the vision of pervasive gaze-based interfaces.
{"title":"GazeProjector: Accurate Gaze Estimation and Seamless Gaze Interaction Across Multiple Displays","authors":"Christian Lander, Sven Gehring, A. Krüger, Sebastian Boring, A. Bulling","doi":"10.1145/2807442.2807479","DOIUrl":"https://doi.org/10.1145/2807442.2807479","url":null,"abstract":"Mobile gaze-based interaction with multiple displays may occur from arbitrary positions and orientations. However, maintaining high gaze estimation accuracy in such situa-tions remains a significant challenge. In this paper, we present GazeProjector, a system that combines (1) natural feature tracking on displays to determine the mobile eye tracker's position relative to a display with (2) accurate point-of-gaze estimation. GazeProjector allows for seam-less gaze estimation and interaction on multiple displays of arbitrary sizes independently of the user's position and orientation to the display. In a user study with 12 partici-pants we compare GazeProjector to established methods (here: visual on-screen markers and a state-of-the-art video-based motion capture system). We show that our approach is robust to varying head poses, orientations, and distances to the display, while still providing high gaze estimation accuracy across multiple displays without re-calibration for each variation. Our system represents an important step towards the vision of pervasive gaze-based interfaces.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129667341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Michael Ortega-Binderberger, W. Stuerzlinger, Douglas Scheurich
In this paper we describe a new orbiting algorithm, called SHOCam, which enables simple, safe and visually attractive control of a camera moving around 3D objects. Compared with existing methods, SHOCam provides a more consistent mapping between the user's interaction and the path of the camera by substantially reducing variability in both camera motion and look direction. Also, we present a new orbiting method that prevents the camera from penetrating object(s), making the visual feedback -- and with it the user experience -- more pleasing and also less error prone. Finally, we present new solutions for orbiting around multiple objects and multi-scale environments.
{"title":"SHOCam: A 3D Orbiting Algorithm","authors":"Michael Ortega-Binderberger, W. Stuerzlinger, Douglas Scheurich","doi":"10.1145/2807442.2807496","DOIUrl":"https://doi.org/10.1145/2807442.2807496","url":null,"abstract":"In this paper we describe a new orbiting algorithm, called SHOCam, which enables simple, safe and visually attractive control of a camera moving around 3D objects. Compared with existing methods, SHOCam provides a more consistent mapping between the user's interaction and the path of the camera by substantially reducing variability in both camera motion and look direction. Also, we present a new orbiting method that prevents the camera from penetrating object(s), making the visual feedback -- and with it the user experience -- more pleasing and also less error prone. Finally, we present new solutions for orbiting around multiple objects and multi-scale environments.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129463842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While current mobile devices detect the presence of surrounding devices, they lack true spatial awareness to bring them into the user's natural 3D space. We present Tracko, a 3D tracking system between two or more commodity devices without added components or device synchronization. Tracko achieves this by fusing three signal types. 1) Tracko infers the presence of and rough distance to other devices from the strength of Bluetooth Low Energy signals. 2) Tracko exchanges a series of inaudible stereo sounds and derives a set of accurate distances between devices from the difference in their arrival times. A Kalman filter integrates both signal cues to place collocated devices in a shared 3D space, combining the robustness of Bluetooth with the accuracy of audio signals for relative 3D tracking. 3) Tracko incorporates inertial sensors to refine 3D estimates and support quick interactions. Tracko robustly tracks devices in 3D with a mean error of 6.5 cm within 0.5 m and a 15.3 cm error within 1 m, which validates Tracko's suitability for cross-device interactions.
{"title":"Tracko: Ad-hoc Mobile 3D Tracking Using Bluetooth Low Energy and Inaudible Signals for Cross-Device Interaction","authors":"Haojian Jin, Christian Holz, K. Hornbæk","doi":"10.1145/2807442.2807475","DOIUrl":"https://doi.org/10.1145/2807442.2807475","url":null,"abstract":"While current mobile devices detect the presence of surrounding devices, they lack a truly spatial awareness to bring them into the user's natural 3D space. We present Tracko, a 3D tracking system between two or more commodity devices without added components or device synchronization. Tracko achieves this by fusing three signal types. 1) Tracko infers the presence of and rough distance to other devices from the strength of Bluetooth low energy signals. 2) Tracko exchanges a series of inaudible stereo sounds and derives a set of accurate distances between devices from the difference in their arrival times. A Kalman filter integrates both signal cues to place collocated devices in a shared 3D space, combining the robustness of Bluetooth with the accuracy of audio signals for relative 3D tracking. 3) Tracko incorporates inertial sensors to refine 3D estimates and support quick interactions. Tracko robustly tracks devices in 3D with a mean error of 6.5 cm within 0.5 m and a 15.3 cm error within 1 m, which validates Trackoffs suitability for cross-device interactions.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124140035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We describe Gunslinger, a mid-air interaction technique using barehand postures and gestures. Unlike past work, we explore a relaxed arms-down position with both hands interacting at the sides of the body. It features "hand-cursor" feedback to communicate recognized hand posture, command mode and tracking quality; and a simple, but flexible hand posture recognizer. Although Gunslinger is suitable for many usage contexts, we focus on integrating mid-air gestures with large display touch input. We show how the Gunslinger form factor enables an interaction language that is equivalent, coherent, and compatible with large display touch input. A four-part study evaluates Midas Touch, posture recognition feedback, pointing and clicking, and general usability.
{"title":"Gunslinger: Subtle Arms-down Mid-air Interaction","authors":"Mingyu Liu, Mathieu Nancel, Daniel Vogel","doi":"10.1145/2807442.2807489","DOIUrl":"https://doi.org/10.1145/2807442.2807489","url":null,"abstract":"We describe Gunslinger, a mid-air interaction technique using barehand postures and gestures. Unlike past work, we explore a relaxed arms-down position with both hands interacting at the sides of the body. It features \"hand-cursor\" feedback to communicate recognized hand posture, command mode and tracking quality; and a simple, but flexible hand posture recognizer. Although Gunslinger is suitable for many usage contexts, we focus on integrating mid-air gestures with large display touch input. We show how the Gunslinger form factor enables an interaction language that is equivalent, coherent, and compatible with large display touch input. A four-part study evaluates Midas Touch, posture recognition feedback, pointing and clicking, and general usability.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128986060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Géry Casiez, Stéphane Conversy, M. Falce, Stéphane Huot, Nicolas Roussel
We present a simple method for measuring end-to-end latency in graphical user interfaces. The method works with most optical mice and allows accurate, real-time latency measurements up to 5 times per second. In addition, the technique allows easy insertion of probes at different places in the system (i.e., mouse event listeners) to investigate the sources of latency. After presenting the measurement method and our methodology, we detail the measurements we performed on different systems, toolkits and applications. Results show that latency is affected by the operating system and system load. Substantial differences are found between C++/GLUT and C++/Qt or Java/Swing implementations, as well as between web browsers.
{"title":"Looking through the Eye of the Mouse: A Simple Method for Measuring End-to-end Latency using an Optical Mouse","authors":"Géry Casiez, Stéphane Conversy, M. Falce, Stéphane Huot, Nicolas Roussel","doi":"10.1145/2807442.2807454","DOIUrl":"https://doi.org/10.1145/2807442.2807454","url":null,"abstract":"We present a simple method for measuring end-to-end latency in graphical user interfaces. The method works with most optical mice and allows accurate and real time latency measures up to 5 times per second. In addition, the technique allows easy insertion of probes at different places in the system I.e. mouse events listeners - to investigate the sources of latency. After presenting the measurement method and our methodology, we detail the measures we performed on different systems, toolkits and applications. Results show that latency is affected by the operating system and system load. Substantial differences are found between C++/GLUT and C++/Qt or Java/Swing implementations, as well as between web browsers.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132906153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Procedural modeling systems allow users to create high-quality content through parametric, conditional or stochastic rule sets. While such approaches create an abstraction layer by freeing the user from direct geometry editing, the nonlinear nature and the high number of parameters associated with such design spaces result in arduous modeling experiences for non-expert users. We propose a method to enable intuitive exploration of such high-dimensional procedural modeling spaces within a lower-dimensional space learned through autoencoder network training. Our method automatically generates a representative training dataset from the procedural modeling rule set based on shape similarity features. We then leverage the samples in this dataset to train an autoencoder neural network, while also structuring the learned lower-dimensional space for continuous exploration with respect to shape features. We demonstrate the efficacy of our method with user studies in which designers create content more than 10 times faster with our system than with the classic procedural modeling interface.
{"title":"Procedural Modeling Using Autoencoder Networks","authors":"M. E. Yümer, P. Asente, R. Mech, L. Kara","doi":"10.1145/2807442.2807448","DOIUrl":"https://doi.org/10.1145/2807442.2807448","url":null,"abstract":"Procedural modeling systems allow users to create high quality content through parametric, conditional or stochastic rule sets. While such approaches create an abstraction layer by freeing the user from direct geometry editing, the nonlinear nature and the high number of parameters associated with such design spaces result in arduous modeling experiences for non-expert users. We propose a method to enable intuitive exploration of such high dimensional procedural modeling spaces within a lower dimensional space learned through autoencoder network training. Our method automatically generates a representative training dataset from the procedural modeling rule set based on shape similarity features. We then leverage the samples in this dataset to train an autoencoder neural network, while also structuring the learned lower dimensional space for continuous exploration with respect to shape features. We demonstrate the efficacy our method with user studies where designers create content with more than 10-fold faster speeds using our system compared to the classic procedural modeling interface.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133773714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Current touch devices separate user authentication from regular interaction, for example by displaying modal login screens before device usage or prompting for in-app passwords, which interrupts the interaction flow. We propose biometric touch sensing, a new approach to representing touch events that enables commodity devices to seamlessly integrate authentication into interaction: From each touch, the touchscreen senses the 2D input coordinates and at the same time obtains biometric features that identify the user. Our approach makes authentication during interaction transparent to the user, yet ensures secure interaction at all times. To implement this on today's devices, our watch prototype Bioamp senses the impedance profile of the user's wrist and modulates a periodic electric signal onto the user's body through the skin. This signal affects the capacitive values touchscreens measure upon touch, allowing devices to identify users on each touch. We integrate our approach into Windows 8 and discuss and demonstrate it in the context of various use cases, including access permissions and protecting private screen contents on personal and shared devices.
{"title":"Biometric Touch Sensing: Seamlessly Augmenting Each Touch with Continuous Authentication","authors":"Christian Holz, Marius Knaust","doi":"10.1145/2807442.2807458","DOIUrl":"https://doi.org/10.1145/2807442.2807458","url":null,"abstract":"Current touch devices separate user authentication from regular interaction, for example by displaying modal login screens before device usage or prompting for in-app passwords, which interrupts the interaction flow. We propose biometric touch sensing, a new approach to representing touch events that enables commodity devices to seamlessly integrate authentication into interaction: From each touch, the touchscreen senses the 2D input coordinates and at the same time obtains biometric features that identify the user. Our approach makes authentication during interaction transparent to the user, yet ensures secure interaction at all times. To implement this on today's devices, our watch prototype Bioamp senses the impedance profile of the user's wrist and modulates a signal onto the user's body through skin using a periodic electric signal. This signal affects the capacitive values touchscreens measure upon touch, allowing devices to identify users on each touch. We integrate our approach into Windows 8 and discuss and demonstrate it in the context of various use cases, including access permissions and protecting private screen contents on personal and shared devices.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116897647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tong Gao, Mira Dontcheva, Eytan Adar, Zhicheng Liu, Karrie Karahalios
Answering questions with data is a difficult and time-consuming process. Visual dashboards and templates make it easy to get started, but asking more sophisticated questions often requires learning a tool designed for expert analysts. Natural language interaction allows users to ask questions directly in complex programs without having to learn how to use an interface. However, natural language is often ambiguous. In this work we propose a mixed-initiative approach to managing ambiguity in natural language interfaces for data visualization. We model ambiguity throughout the process of turning a natural language query into a visualization and use algorithmic disambiguation coupled with interactive ambiguity widgets. These widgets allow the user to resolve ambiguities by surfacing system decisions at the point where the ambiguity matters. Corrections are stored as constraints and influence subsequent queries. We have implemented these ideas in a system, DataTone. In a comparative study, we find that DataTone is easy to learn and lets users ask questions without worrying about syntax and proper question form.
{"title":"DataTone: Managing Ambiguity in Natural Language Interfaces for Data Visualization","authors":"Tong Gao, Mira Dontcheva, Eytan Adar, Zhicheng Liu, Karrie Karahalios","doi":"10.1145/2807442.2807478","DOIUrl":"https://doi.org/10.1145/2807442.2807478","url":null,"abstract":"Answering questions with data is a difficult and time-consuming process. Visual dashboards and templates make it easy to get started, but asking more sophisticated questions often requires learning a tool designed for expert analysts. Natural language interaction allows users to ask questions directly in complex programs without having to learn how to use an interface. However, natural language is often ambiguous. In this work we propose a mixed-initiative approach to managing ambiguity in natural language interfaces for data visualization. We model ambiguity throughout the process of turning a natural language query into a visualization and use algorithmic disambiguation coupled with interactive ambiguity widgets. These widgets allow the user to resolve ambiguities by surfacing system decisions at the point where the ambiguity matters. Corrections are stored as constraints and influence subsequent queries. We have implemented these ideas in a system, DataTone. In a comparative study, we find that DataTone is easy to learn and lets users ask questions without worrying about syntax and proper question form.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124212032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}