Detecting student frustration based on handwriting behavior
H. Asai, H. Yamana. DOI: https://doi.org/10.1145/2508468.2514718
Detecting states of frustration among students engaged in learning activities is critical to the success of teaching-assistance tools. We examine the relationship between a student's pen activity and his or her state of frustration while solving handwritten problems. In a user study involving mathematics problems, our detection method identified student frustration with a precision of 87% and a recall of 90%. We also identified several particularly discriminative features, including the number of written strokes, the number of erased strokes, pen activity time, and in-air stroke speed.

Flexkit: a rapid prototyping platform for flexible displays
David Holman, Jesse Burstyn, R. Brotman, A. Younkin, Roel Vertegaal. DOI: https://doi.org/10.1145/2508468.2514934
Commercially available development platforms for flexible displays are not designed for rapid prototyping. To create a deformable interface that uses a functional flexible display, designers must be familiar with embedded hardware systems and the corresponding programming. We introduce Flexkit, a platform that allows designers to rapidly prototype deformable applications using a thin-film electrophoretic display that is effectively "plug and play". To demonstrate Flexkit's ease of use, we present its application in PaperTab's design iteration as a case study. We further discuss how dithering can be used to increase the frame rate of electrophoretic displays from 1 fps to 5 fps.

QOOK: a new physical-virtual coupling experience for active reading
Yuhang Zhao, Yongqiang Qin, Yang Liu, Siqi Liu, Yuanchun Shi. DOI: https://doi.org/10.1145/2508468.2514928
We present QOOK, an interactive reading system that combines the benefits of physical and digital books to facilitate active reading. QOOK uses a top-mounted projector to render digital content onto a blank paper book. By detecting markers attached to each page, QOOK allows users to flip pages just as they would with a real book. Electronic functions such as keyword search, highlighting, and bookmarking provide users with additional digital assistance, and a Kinect sensor that recognizes touch gestures lets people invoke these functions directly with their fingers. Combining the electronic functions of the virtual interface with free-form interaction with the physical book creates a natural reading experience, enabling faster navigation between pages and better understanding of the book's contents.

Adjunct Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology
S. Izadi, A. Quigley, I. Poupyrev, T. Igarashi. DOI: https://doi.org/10.1145/2508468
It is our pleasure to welcome you to the 26th Annual ACM Symposium on User Interface Software and Technology (UIST) 2013, held October 8-11 in the historic town and University of St Andrews, Scotland, United Kingdom.

UIST is the premier forum for the presentation of research innovations in the software and technology of human-computer interfaces. Sponsored by ACM's special interest groups on computer-human interaction (SIGCHI) and computer graphics (SIGGRAPH), UIST brings together researchers and practitioners from many areas, including web and graphical interfaces, new input and output devices, information visualization, sensing technologies, interactive displays, tabletop and tangible computing, interaction techniques, augmented and virtual reality, ubiquitous computing, and computer-supported cooperative work. The single-track program and intimate size make UIST 2013 an ideal place to exchange results at the cutting edge of user interface research, to meet friends and colleagues, and to forge future collaborations.

We received a record 317 paper submissions from more than 30 countries. After a thorough review process, the program committee accepted 62 papers (19.5%). Each anonymous submission was first reviewed by three external reviewers, and meta-reviews were provided by two program committee members. If any of the five reviewers deemed a submission to pass the rejection threshold, we asked the authors to submit a short rebuttal addressing the reviewers' concerns. The program committee met in person in Pittsburgh, PA, on May 30-31, 2013, to select the papers for the conference. Submissions were accepted only after the authors provided a final revision addressing the committee's comments.

In addition to the presentations of accepted papers, this year's program includes a keynote by Raffaello D'Andrea (ETH Zurich) on feedback control systems for autonomous machines. A great line-up of posters, demos, the ninth annual Doctoral Symposium, and the fifth annual Student Innovation Contest (this year focusing on programmable water pumps called PumpSpark) completes the program. We hope you enjoy all aspects of the UIST 2013 program, that you get to enjoy our wonderful venues, and that your discussions and interactions prove fruitful.

Identifying emergent behaviours from longitudinal web use
Aitor Apaolaza. DOI: https://doi.org/10.1145/2508468.2508475
Laboratory studies make it difficult to understand how usage evolves over time, and the observations they rely on are obtrusive and not naturalistic. Our system employs a remote capture tool that provides longitudinal, low-level interaction data. It is easily deployable into any Web site, enabling deployments in the wild, and is completely unobtrusive. Web application interfaces are designed around assumed user goals: requirement specifications contain well-defined use cases and scenarios that drive design and subsequent optimisations, while interaction patterns outside the expected ones are not considered. The result is optimisation for a stylised user rather than a real one. A bottom-up analysis of low-level interaction data allows users' tasks to emerge; similarities among users can then be found, and solutions that are effective for real users can be designed. Factors such as learnability, and how interface changes affect users, are difficult to observe in laboratory studies. Our solution makes this possible, adding a longitudinal point of view to traditional laboratory studies. The capture tool is deployed in real-world Web applications, capturing in-situ data from users. These data serve to explore analysis and visualisation possibilities, and we present an example of the exploration results with one Web application.

BackTap: robust four-point tapping on the back of an off-the-shelf smartphone
Cheng Zhang, Aman Parnami, Caleb Southern, Edison Thomaz, Gabriel Reyes, R. Arriaga, G. Abowd. DOI: https://doi.org/10.1145/2508468.2514735
We present BackTap, an interaction technique that extends the input modality of a smartphone with four distinct tap locations on its back case. The BackTap interaction can be used eyes-free, with the phone in a user's pocket, purse, or armband while walking, or while holding the phone with two hands so as not to occlude the screen with the fingers. We employ three common built-in sensors on the smartphone (microphone, gyroscope, and accelerometer) and a lightweight heuristic implementation. In an evaluation with eleven participants and three usage conditions, users were able to tap four distinct points with 92% to 96% accuracy.

Enabling an ecosystem of personal behavioral data
Jason Wiese. DOI: https://doi.org/10.1145/2508468.2508472
Almost every computational system a person interacts with keeps a detailed log of that person's behavior. This data promises a breadth of new service opportunities for improving people's lives through deep personalization, tools to manage aspects of their personal wellbeing, and services that support identity construction. However, the way this data is collected and managed today introduces several challenges that severely limit its utility. This thesis maps out a computational ecosystem for personal behavioral data through the design, implementation, and evaluation of Phenom, a web service that factors out common activities in making inferences from personal behavioral data. The primary benefits of Phenom include a structured process for aggregating and representing user data, support for developing models based on personal behavioral data, and a unified API for accessing inferences made by models within Phenom. To evaluate Phenom for ease of use and versatility, an external set of developers will create example applications with it.

A touchless passive infrared gesture sensor
Piotr Wojtczuk, T. David Binnie, A. Armitage, T. Chamberlain, C. Giebeler. DOI: https://doi.org/10.1145/2508468.2514713
We present a sensing device for a touchless hand-gesture user interface based on an inexpensive passive infrared pyroelectric detector array. The 2 x 2 element sensor responds to changing infrared radiation generated by hand movement over the array, with a sensing range from a few millimetres to tens of centimetres. Its low power consumption (< 50 μW) enables use in mobile devices and in low-energy applications. Detection rates of 77% have been demonstrated using a prototype system that differentiates the four main hand-motion trajectories: up, down, left, and right. The device allows greater non-contact control capability without an increase in size, cost, or power consumption over existing on/off devices.

A cluster information navigate method by gaze tracking
Dawei Cheng, Danqiong Li, Liang Fang. DOI: https://doi.org/10.1145/2508468.2514710
With the rapid growth of data volumes, it is increasingly difficult to present and navigate large amounts of data conveniently on mobile devices with small screens. To address this challenge, we present a new method that displays cluster information in a hierarchical pattern and lets users interact with it through eye movements captured by the front camera of the mobile device. The key contribution of this system is a new interaction method that lets users navigate and select data quickly with their eyes, without any additional equipment.

Sensor design and interaction techniques for gestural input to smart glasses and mobile devices
Andrea Colaco. DOI: https://doi.org/10.1145/2508468.2508474
Touchscreen interfaces for small display devices have several limitations: the act of touching the screen occludes the display, interface elements like keyboards consume precious display real estate, and even simple tasks like document navigation, which the user performs effortlessly with a mouse and keyboard, require repeated actions like pinch-and-zoom with touch input. More recently, smart glasses with limited or no touch input are starting to emerge commercially; however, the primary input to these systems has been voice. In this paper, we explore the space around the device as a means of touchless gestural input to devices with small or no displays. Capturing gestural input in the surrounding volume requires sensing the human hand. To achieve this, we have built Mime [3], a compact, low-power 3D sensor for short-range gestural control of small display devices. Our sensor is based on a novel signal-processing pipeline and is built using standard off-the-shelf components. Using Mime, we have demonstrated a variety of application scenarios, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight. In my thesis, I will continue to extend the sensor's capabilities to support new interaction styles.