{"title":"Session details: Mobile","authors":"Nicolai Marquardt","doi":"10.1145/3254700","DOIUrl":"https://doi.org/10.1145/3254700","url":null,"abstract":"","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"153 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133192737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Human-computer interaction for hybrid carving" by Amit Zoran, Roy Shilkrot, and J. Paradiso. UIST 2013. DOI: 10.1145/2501988.2502023.
In this paper we explore human-computer interaction for carving, building upon our previous work with the FreeD digital sculpting device. We contribute a new tool design (FreeD V2), with a novel set of interaction techniques for the fabrication of static models: personalized tool paths, manual overriding, and physical merging of virtual models. We also present techniques for fabricating dynamic models, which may be altered directly or parametrically during fabrication. We demonstrate a semi-autonomous operation and evaluate the performance of the tool. We end by discussing synergistic cooperation between human and machine to ensure accuracy while preserving the expressiveness of manual practice.
"Chorus: a crowd-powered conversational assistant" by Walter S. Lasecki, Rachel Wesley, Jeffrey Nichols, A. Kulkarni, James F. Allen, and Jeffrey P. Bigham. UIST 2013. DOI: 10.1145/2501988.2502057.
Despite decades of research attempting to establish conversational interaction between humans and computers, the capabilities of automated conversational systems are still limited. In this paper, we introduce Chorus, a crowd-powered conversational assistant. When using Chorus, end users converse continuously with what appears to be a single conversational partner. Behind the scenes, Chorus leverages multiple crowd workers to propose and vote on responses. A shared memory space helps the dynamic crowd workforce maintain consistency, and a game-theoretic incentive mechanism helps balance their efforts between proposing and voting. Studies with 12 end users and 100 crowd workers demonstrate that Chorus can provide accurate, topical responses, answering nearly 93% of user queries appropriately and staying on topic in over 95% of responses. We also observed that Chorus offers advantages in speed, quality, and breadth of assistance over both pairing an end user with a single crowd worker and having end users complete tasks on their own. Chorus demonstrates a future in which conversational assistants are made usable in the real world by combining human and machine intelligence, and it may enable a useful new way of interacting with the crowds powering other systems.
{"title":"Chorus: a crowd-powered conversational assistant","authors":"Walter S. Lasecki, Rachel Wesley, Jeffrey Nichols, A. Kulkarni, James F. Allen, Jeffrey P. Bigham","doi":"10.1145/2501988.2502057","DOIUrl":"https://doi.org/10.1145/2501988.2502057","url":null,"abstract":"Despite decades of research attempting to establish conversational interaction between humans and computers, the capabilities of automated conversational systems are still limited. In this paper, we introduce Chorus, a crowd-powered conversational assistant. When using Chorus, end users converse continuously with what appears to be a single conversational partner. Behind the scenes, Chorus leverages multiple crowd workers to propose and vote on responses. A shared memory space helps the dynamic crowd workforce maintain consistency, and a game-theoretic incentive mechanism helps to balance their efforts between proposing and voting. Studies with 12 end users and 100 crowd workers demonstrate that Chorus can provide accurate, topical responses, answering nearly 93% of user queries appropriately, and staying on-topic in over 95% of responses. We also observed that Chorus has advantages over pairing an end user with a single crowd worker and end users completing their own tasks in terms of speed, quality, and breadth of assistance. Chorus demonstrates a new future in which conversational assistants are made usable in the real world by combining human and machine intelligence, and may enable a useful new way of interacting with the crowds powering other systems.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123764823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Crowd-scale interactive formal reasoning and analytics" by Ethan Fast, Colleen Lee, A. Aiken, Michael S. Bernstein, D. Koller, and Eric Smith. UIST 2013. DOI: 10.1145/2501988.2502028.
Large online courses often assign problems that are easy to grade because they have a fixed set of solutions (such as multiple choice), but grading and guiding students is more difficult in problem domains that have an unbounded number of correct answers. One such domain is derivations: sequences of logical steps commonly used in assignments for technical, mathematical, and scientific subjects. We present DeduceIt, a system for creating, grading, and analyzing derivation assignments in any formal domain. DeduceIt supports assignments in any logical formalism, provides students with incremental feedback, and aggregates student paths through each proof to produce instructor analytics. DeduceIt benefits from checking thousands of derivations on the web: it introduces a proof cache, a novel data structure that leverages a crowd of students to decrease the cost of checking derivations and of providing real-time, constructive feedback. We evaluate DeduceIt with 990 students in an online compilers course, finding that students take advantage of its incremental feedback and that instructors benefit from its structured insights into course topics. Our work suggests that automated reasoning can extend online assignments and large-scale education to many new domains.
{"title":"Crowd-scale interactive formal reasoning and analytics","authors":"Ethan Fast, Colleen Lee, A. Aiken, Michael S. Bernstein, D. Koller, Eric Smith","doi":"10.1145/2501988.2502028","DOIUrl":"https://doi.org/10.1145/2501988.2502028","url":null,"abstract":"Large online courses often assign problems that are easy to grade because they have a fixed set of solutions (such as multiple choice), but grading and guiding students is more difficult in problem domains that have an unbounded number of correct answers. One such domain is derivations: sequences of logical steps commonly used in assignments for technical, mathematical and scientific subjects. We present DeduceIt, a system for creating, grading, and analyzing derivation assignments in any formal domain. DeduceIt supports assignments in any logical formalism, provides students with incremental feedback, and aggregates student paths through each proof to produce instructor analytics. DeduceIt benefits from checking thousands of derivations on the web: it introduces a proof cache, a novel data structure which leverages a crowd of students to decrease the cost of checking derivations and providing real-time, constructive feedback. We evaluate DeduceIt with 990 students in an online compilers course, finding students take advantage of its incremental feedback and instructors benefit from its structured insights into course topics. Our work suggests that automated reasoning can extend online assignments and large-scale education to many new domains.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134253014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Paper generators: harvesting energy from touching, rubbing and sliding" by M. E. Karagozler, I. Poupyrev, G. Fedder, and Yuri Suzuki. UIST 2013. DOI: 10.1145/2501988.2502054.
We present a new energy harvesting technology that generates electrical energy from a user's interactions with paper-like materials. The energy harvesters are flexible, light, and inexpensive, and they utilize a user's gestures such as tapping, touching, rubbing and sliding to generate electrical energy. The harvested energy is then used to actuate LEDs, e-paper displays and various other devices to create novel interactive applications, such as enhancing books and other printed media with interactivity.
{"title":"Paper generators: harvesting energy from touching, rubbing and sliding","authors":"M. E. Karagozler, I. Poupyrev, G. Fedder, Yuri Suzuki","doi":"10.1145/2501988.2502054","DOIUrl":"https://doi.org/10.1145/2501988.2502054","url":null,"abstract":"We present a new energy harvesting technology that generates electrical energy from a user's interactions with paper-like materials. The energy harvesters are flexible, light, and inexpensive, and they utilize a user's gestures such as tapping, touching, rubbing and sliding to generate electrical energy. The harvested energy is then used to actuate LEDs, e-paper displays and various other devices to create novel interactive applications, such as enhancing books and other printed media with interactivity.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120987860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Panopticon: a parallel video overview system" by D. Jackson, James Nicholson, Gerrit Stoeckigt, Rebecca Wrobel, Anja Thieme, and P. Olivier. UIST 2013. DOI: 10.1145/2501988.2502038.
Panopticon is a video surrogate system that displays multiple sub-sequences in parallel to present a rapid overview of the entire sequence to the user. A novel, precisely animated arrangement slides thumbnails to provide a consistent spatiotemporal layout while allowing any sub-sequence of the original video to be watched without interruption. Furthermore, this output can be generated offline as a highly efficient repeated animation loop, making it suitable for resource-constrained environments such as web-based interaction. Two versions of Panopticon were evaluated using three different types of video footage to determine the usability of the proposed system. Results demonstrated an advantage in search times over another surrogate for surveillance footage, and this advantage improved further with Panopticon 2. Eye-tracking data suggests that Panopticon's advantage stems from the animated timeline on which users heavily rely.
{"title":"Panopticon: a parallel video overview system","authors":"D. Jackson, James Nicholson, Gerrit Stoeckigt, Rebecca Wrobel, Anja Thieme, P. Olivier","doi":"10.1145/2501988.2502038","DOIUrl":"https://doi.org/10.1145/2501988.2502038","url":null,"abstract":"Panopticon is a video surrogate system that displays multiple sub-sequences in parallel to present a rapid overview of the entire sequence to the user. A novel, precisely animated arrangement slides thumbnails to provide a consistent spatiotemporal layout while allowing any sub-sequence of the original video to be watched without interruption. Furthermore, this output can be generated offline as a highly efficient repeated animation loop, making it suitable for resource-constrained environments, such as web-based interaction. Two versions of Panopticon were evaluated using three different types of video footage with the aim of determining the usability of the proposed system. Results demonstrated an advantage over another surrogate with surveillance footage in terms of search times and this advantage was further improved with Panopticon 2. Eye tracking data suggests that Panopticon's advantage stems from the animated timeline that users heavily rely on.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124210729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"CrowdLearner: rapidly creating mobile recognizers using crowdsourcing" by Shahriyar Amini and Y. Li. UIST 2013. DOI: 10.1145/2501988.2502029.
Mobile applications can offer an improved user experience through the use of novel modalities and user context. However, these new input dimensions often require recognition-based techniques with which mobile app developers or designers may not be familiar. Furthermore, the recruiting, data collection, and labeling necessary for using these techniques are usually time-consuming and expensive. We present CrowdLearner, a crowdsourcing-based framework for automatically generating recognizers from mobile sensor input such as accelerometer or touchscreen readings. CrowdLearner allows a developer to easily create a recognition task, distribute it to the crowd, and monitor its progress as more data becomes available. We deployed CrowdLearner to a crowd of 72 mobile users over a period of 2.5 weeks and evaluated the system on 6 recognition tasks covering motion gestures, touchscreen gestures, and activity recognition. The results indicated that CrowdLearner enables a developer to quickly acquire a usable recognizer for their specific application for a moderate cost, often less than $10, and in a short time, often on the order of 2 hours. Our exploration also revealed challenges and provided insights into the design of future crowdsourcing systems for machine learning tasks.
"PAPILLON: designing curved display surfaces with printed optics" by Eric Brockmeyer, I. Poupyrev, and S. Hudson. UIST 2013. DOI: 10.1145/2501988.2502027.
We present a technology for designing curved display surfaces that can both display information and sense two dimensions of human touch. It is based on 3D printed optics: the surface of the display is constructed as a bundle of printed light pipes that direct images from an arbitrary planar image source to the surface of the display. This effectively decouples the display surface from the image source, allowing designers to iterate on the design of displays without requiring changes to the complex electronics and optics of the device. In addition, the same optical elements direct light from the surface of the display back to the image sensor, allowing for touch input and proximity detection of a hand relative to the display surface. The resulting technology is effective for designing compact, efficient displays of small size; it has been applied in the design of interactive animated eyes.
"GIST: a gestural interface for remote nonvisual spatial perception" by V. Khambadkar and Eelke Folmer. UIST 2013. DOI: 10.1145/2501988.2502047.
Spatial perception is a challenging task for people who are blind due to the limited functionality and sensing range of the hands. We present GIST, a wearable gestural interface that offers spatial perception functionality through the novel appropriation of the user's hands into versatile sensing rods. Using a wearable depth-sensing camera, GIST analyzes the visible physical space and allows blind users to access spatial information about this space using different hand gestures. By allowing blind users to directly explore the physical space using gestures, GIST allows for the closest mapping between augmented and physical reality, which facilitates spatial interaction. A user study with eight blind users evaluates GIST's ability to help users perform everyday tasks that rely on spatial perception, such as grabbing an object or interacting with a person. Results of our study may help in developing new gesture-based assistive applications.
"dePENd: augmented handwriting system using ferromagnetism of a ballpoint pen" by Junichi Yamaoka and Y. Kakehi. UIST 2013. DOI: 10.1145/2501988.2502017.
This paper presents dePENd, a novel interactive system that assists sketching with regular pens and paper. The system exploits the ferromagnetic metal tip of an ordinary ballpoint pen: a computer controls the X and Y position of a magnet under the surface of the table, providing entirely new drawing experiences. By steering the pen's movements and presenting haptic guides, the system lets a user easily draw diagrams and pictures made of lines and circles, which are difficult to produce freehand. The system also allows users to freely edit and arrange prescribed pictures, which is expected to lower the barrier to drawing and promote users' creativity. In addition, we propose a communication tool using two dePENd systems that is expected to enhance users' drawing skills. These functions enable interactive applications such as copying and redrawing drafted pictures or scaling pictures with a digital pen. We implement the system and evaluate its technical features, and we describe the design and implementation of the device along with applications, technical evaluations, and future prospects.