Xavier Le Pallec, Raphaël Marvie, J. Rouillard, Jean-Claude Tarby
Programming an application that uses interactive devices located on different terminals is not easy. Programming such applications with standard Web technologies (HTTP, JavaScript, Web browsers) is even more difficult. However, Web applications have attractive properties: they run on very different terminals, require no specific installation step, and allow the application code to evolve at runtime. Our demonstration presents support for designing multi-device Web applications. After introducing the context of this work, we briefly describe some problems related to the design of multi-device Web applications. Then, we present the toolkit we have implemented to help develop applications based on distant interactive devices.
{"title":"A support to multi-devices web application","authors":"Xavier Le Pallec, Raphaël Marvie, J. Rouillard, Jean-Claude Tarby","doi":"10.1145/1866218.1866235","DOIUrl":"https://doi.org/10.1145/1866218.1866235","url":null,"abstract":"Programming an application which uses interactive devices located on different terminals is not easy. Programming such applications with standard Web technologies (HTTP, Javascript, Web browser) is even more difficult. However, Web applications have interesting properties like running on very different terminals, the lack of a specific installation step, the ability to evolve the application code at runtime. Our demonstration presents a support for designing multi-devices Web applications. After introducing the context of this work, we briefly describe some problems related to the design of multi-devices web application. Then, we present the toolkit we have implemented to help the development of applications based upon distant interactive devices.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"15 1","pages":"391-392"},"PeriodicalIF":0.0,"publicationDate":"2010-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90446483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Keywon Chung, Michael Shilman, C. Merrill, H. Ishii
Many Tangible User Interface (TUI) systems employ sensor-equipped physical objects. However, they do not easily scale to users' actual environments: most everyday objects lack the necessary hardware, and modification requires hardware and software development by skilled individuals. This limits TUI creation by end users, resulting in inflexible interfaces in which the mapping of sensor input and output events cannot easily be modified to reflect the end user's wishes and circumstances. We introduce OnObject, a small device worn on the hand, which can program physical objects to respond to a set of gestural triggers. Users attach RFID tags to situated objects, grab them by the tag, and program their responses to grab, release, shake, swing, and thrust gestures using a built-in button and a microphone. In this paper, we demonstrate how novice end users, including preschool children, can instantly create engaging gestural object interfaces with sound feedback from toys, drawings, or clay.
{"title":"OnObject: gestural play with tagged everyday objects","authors":"Keywon Chung, Michael Shilman, C. Merrill, H. Ishii","doi":"10.1145/1866218.1866229","DOIUrl":"https://doi.org/10.1145/1866218.1866229","url":null,"abstract":"Many Tangible User Interface (TUI) systems employ sensor-equipped physical objects. However they do not easily scale to users' actual environments; most everyday objects lack the necessary hardware, and modification requires hardware and software development by skilled individuals. This limits TUI creation by end users, resulting in inflexible interfaces in which the mapping of sensor input and output events cannot be easily modified reflecting the end user's wishes and circumstances. We introduce OnObject, a small device worn on the hand, which can program physical objects to respond to a set of gestural triggers. Users attach RFID tags to situated objects, grab them by the tag, and program their responses to grab, release, shake, swing, and thrust gestures using a built-in button and a microphone. In this paper, we demonstrate how novice end users including preschool children can instantly create engaging gestural object interfaces with sound feedback from toys, drawings, or clay.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"28 1","pages":"379-380"},"PeriodicalIF":0.0,"publicationDate":"2010-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89848474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In order to use motion gestures with mobile devices, it is imperative that the device be able to distinguish between input motion and everyday motion. In this abstract we present DoubleFlip, a unique motion gesture designed to act as an input delimiter for mobile motion gestures. We demonstrate that the DoubleFlip gesture is extremely resistant to false positives while still achieving high recognition accuracy. Since DoubleFlip is easy to perform and unlikely to be invoked accidentally, it provides an always-active input event for mobile interaction.
{"title":"DoubleFlip: a motion gesture delimiter for interaction","authors":"J. Ruiz, Yang Li","doi":"10.1145/1866218.1866265","DOIUrl":"https://doi.org/10.1145/1866218.1866265","url":null,"abstract":"In order to use motion gestures with mobile devices it is imperative that the device be able to distinguish between input motion and everyday motion. In this abstract we present DoubleFlip, a unique motion gesture designed to act as an input delimiter for mobile motion gestures. We demonstrate that the DoubleFlip gesture is extremely resistant to false positive conditions, while still achieving high recognition accuracy. Since DoubleFlip is easy to perform and less likely to be accidentally invoked, it provides an always-active input event for mobile interaction.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"449-450"},"PeriodicalIF":0.0,"publicationDate":"2010-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89474452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we present a novel interface for selecting sounds in audio mixtures. Traditional interfaces in audio editors provide a graphical representation of sound, either a waveform or some variation of a time/frequency transform. Although these representations may let a user visually identify elements of a mixture, they do not facilitate object-specific editing (e.g. selecting only the voice of a singer in a song). Our interface instead uses audio guidance from the user to select a target sound within a mixture. The user is asked to vocalize (or otherwise sonically represent) the desired target sound, and an automatic process identifies and isolates the elements of the mixture that best relate to the user's input. This way of pointing to specific parts of an audio stream allows a user to perform audio selections that would otherwise have been infeasible.
{"title":"User guided audio selection from complex sound mixtures","authors":"P. Smaragdis","doi":"10.1145/1622176.1622193","DOIUrl":"https://doi.org/10.1145/1622176.1622193","url":null,"abstract":"In this paper we present a novel interface for selecting sounds in audio mixtures. Traditional interfaces in audio editors provide a graphical representation of sounds which is either a waveform, or some variation of a time/frequency transform. Although with these representations a user might be able to visually identify elements of sounds in a mixture, they do not facilitate object-specific editing (e.g. selecting only the voice of a singer in a song). This interface uses audio guidance from a user in order to select a target sound within a mixture. The user is asked to vocalize (or otherwise sonically represent) the desired target sound, and an automatic process identifies and isolates the elements of the mixture that best relate to the user's input. This way of pointing to specific parts of an audio stream allows a user to perform audio selections which would have been infeasible otherwise.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"27 1","pages":"89-92"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73491891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E. Solovey, A. Girouard, K. Chauncey, Leanne M. Hirshfield, A. Sassaroli, F. Zheng, S. Fantini, R. Jacob
Because functional near-infrared spectroscopy (fNIRS) eases many of the restrictions of other brain sensors, it has the potential to open up new possibilities for HCI research. From our experience using fNIRS technology for HCI, we identify several considerations and provide guidelines for using fNIRS in realistic HCI laboratory settings. We empirically examine whether typical human behavior (e.g. head and facial movement) or computer interaction (e.g. keyboard and mouse usage) interferes with brain measurement using fNIRS. Based on the results of our study, we establish which physical behaviors inherent in computer usage interfere with accurate fNIRS sensing of cognitive state information, which can be corrected for in data analysis, and which are acceptable. With these findings, we hope to facilitate further adoption of fNIRS brain sensing technology in HCI research.
{"title":"Using fNIRS brain sensing in realistic HCI settings: experiments and guidelines","authors":"E. Solovey, A. Girouard, K. Chauncey, Leanne M. Hirshfield, A. Sassaroli, F. Zheng, S. Fantini, R. Jacob","doi":"10.1145/1622176.1622207","DOIUrl":"https://doi.org/10.1145/1622176.1622207","url":null,"abstract":"Because functional near-infrared spectroscopy (fNIRS) eases many of the restrictions of other brain sensors, it has potential to open up new possibilities for HCI research. From our experience using fNIRS technology for HCI, we identify several considerations and provide guidelines for using fNIRS in realistic HCI laboratory settings. We empirically examine whether typical human behavior (e.g. head and facial movement) or computer interaction (e.g. keyboard and mouse usage) interfere with brain measurement using fNIRS. Based on the results of our study, we establish which physical behaviors inherent in computer usage interfere with accurate fNIRS sensing of cognitive state information, which can be corrected in data analysis, and which are acceptable. With these findings, we hope to facilitate further adoption of fNIRS brain sensing technology in HCI research.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"8 1","pages":"157-166"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83576584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rotate-Scale-Translate (RST) interactions have become the de facto standard when interacting with two-dimensional (2D) contexts in single-touch and multi-touch environments. Because the use of RST has thus far focused almost entirely on 2D, there are not yet standard techniques for extending these principles into three dimensions. In this paper we describe a screen-space method that fully captures the semantics of the traditional 2D RST multi-touch interaction, but also allows us to extend these same principles into three-dimensional (3D) interaction. Just as RST allows users to directly manipulate 2D contexts with two or more points, our method allows the user to directly manipulate 3D objects with three or more points. We show novel interactions that take perspective into account and are thus not available in orthographic environments. Furthermore, we identify key ambiguities and unexpected behaviors that arise when performing direct manipulation in 3D and offer solutions to mitigate the difficulties each presents. Finally, we show how to extend our method to meet application-specific control objectives, and demonstrate it in several example environments.
{"title":"A screen-space formulation for 2D and 3D direct manipulation","authors":"J. Reisman, Philip L. Davidson, Jefferson Y. Han","doi":"10.1145/1622176.1622190","DOIUrl":"https://doi.org/10.1145/1622176.1622190","url":null,"abstract":"Rotate-Scale-Translate (RST) interactions have become the de facto standard when interacting with two-dimensional (2D) contexts in single-touch and multi-touch environments. Because the use of RST has thus far focused almost entirely on 2D, there are not yet standard techniques for extending these principles into three dimensions. In this paper we describe a screen-space method which fully captures the semantics of the traditional 2D RST multi-touch interaction, but also allows us to extend these same principles into three-dimensional (3D) interaction. Just like RST allows users to directly manipulate 2D contexts with two or more points, our method allows the user to directly manipulate 3D objects with three or more points. We show some novel interactions, which take perspective into account and are thus not available in orthographic environments. Furthermore, we identify key ambiguities and unexpected behaviors that arise when performing direct manipulation in 3D and offer solutions to mitigate the difficulties each presents. Finally, we show how to extend our method to meet application-specific control objectives, as well as show our method working in some example environments.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"11 1","pages":"69-78"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85350830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P. Dietz, Benjamin D. Eidelson, Jonathan Westhues, Steven Bathiche
A pressure sensitive computer keyboard is presented that independently senses the force level on every depressed key. The design leverages existing membrane technologies and is suitable for low-cost, high-volume manufacturing. A number of representative applications are discussed.
{"title":"A practical pressure sensitive computer keyboard","authors":"P. Dietz, Benjamin D. Eidelson, Jonathan Westhues, Steven Bathiche","doi":"10.1145/1622176.1622187","DOIUrl":"https://doi.org/10.1145/1622176.1622187","url":null,"abstract":"A pressure sensitive computer keyboard is presented that independently senses the force level on every depressed key. The design leverages existing membrane technologies and is suitable for low-cost, high-volume manufacturing. A number of representative applications are discussed.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"27 1","pages":"55-58"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76988800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One of the challenges with mobile touch-screen devices is that they do not provide tactile feedback to the user. Thus, the user is required to look at the screen to interact with these devices. In this paper, we present SemFeel, a tactile feedback system that informs the user about the presence of an object at the point she touches on the screen and can offer additional semantic information about that item. Through multiple vibration motors attached to the back of a mobile touch-screen device, SemFeel can generate different patterns of vibration, such as ones that flow from right to left or from top to bottom, to help the user interact with a mobile device. Through two user studies, we show that users can distinguish ten different patterns, including linear patterns and a circular pattern, at approximately 90% accuracy, and that SemFeel supports accurate eyes-free interaction.
{"title":"SemFeel: a user interface with semantic tactile feedback for mobile touch-screen devices","authors":"K. Yatani, K. Truong","doi":"10.1145/1622176.1622198","DOIUrl":"https://doi.org/10.1145/1622176.1622198","url":null,"abstract":"One of the challenges with using mobile touch-screen devices is that they do not provide tactile feedback to the user. Thus, the user is required to look at the screen to interact with these devices. In this paper, we present SemFeel, a tactile feedback system which informs the user about the presence of an object where she touches on the screen and can offer additional semantic information about that item. Through multiple vibration motors that we attached to the backside of a mobile touch-screen device, SemFeel can generate different patterns of vibration, such as ones that flow from right to left or from top to bottom, to help the user interact with a mobile device. Through two user studies, we show that users can distinguish ten different patterns, including linear patterns and a circular pattern, at approximately 90% accuracy, and that SemFeel supports accurate eyes-free interactions.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"65 1","pages":"111-120"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72979757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T. S. Saponas, Desney S. Tan, Dan Morris, Ravin Balakrishnan, Jim Turner, J. Landay
Previous work has demonstrated the viability of applying offline analysis to interpret forearm electromyography (EMG) and classify finger gestures on a physical surface. We extend those results to bring us closer to using muscle-computer interfaces for always-available input in real-world applications. We leverage existing taxonomies of natural human grips to develop a gesture set covering interaction in free space, even when the hands are busy with other objects. We present a system that classifies these gestures in real time, and we introduce a bi-manual paradigm that enables use in interactive systems. We report experimental results demonstrating four-finger classification accuracies averaging 79% for pinching, 85% while holding a travel mug, and 88% when carrying a weighted bag. We further show generalizability across different arm postures and explore the tradeoffs of providing real-time visual feedback.
{"title":"Enabling always-available input with muscle-computer interfaces","authors":"T. S. Saponas, Desney S. Tan, Dan Morris, Ravin Balakrishnan, Jim Turner, J. Landay","doi":"10.1145/1622176.1622208","DOIUrl":"https://doi.org/10.1145/1622176.1622208","url":null,"abstract":"Previous work has demonstrated the viability of applying offline analysis to interpret forearm electromyography (EMG) and classify finger gestures on a physical surface. We extend those results to bring us closer to using muscle-computer interfaces for always-available input in real-world applications. We leverage existing taxonomies of natural human grips to develop a gesture set covering interaction in free space even when hands are busy with other objects. We present a system that classifies these gestures in real-time and we introduce a bi-manual paradigm that enables use in interactive systems. We report experimental results demonstrating four-finger classification accuracies averaging 79% for pinching, 85% while holding a travel mug, and 88% when carrying a weighted bag. We further show generalizability across different arm postures and explore the tradeoffs of providing real-time visual feedback.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"22 1","pages":"167-176"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78719783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time-series graphs are often used to visualize phenomena that change over time. Common tasks include comparing values at different points in time and searching for specified patterns, either exact or approximate. However, tools that support time-series graphs typically separate query specification from the actual search process, allowing users to adapt the level of similarity only after specifying the pattern. We introduce relaxed selection techniques, in which users implicitly define a level of similarity that can vary across the search pattern, while creating a search query with a single-gesture interaction. Users sketch over part of the graph, establishing the level of similarity through either spatial deviations from the graph or the speed at which they sketch (temporal deviations). In a user study, participants were significantly faster when using our temporally relaxed selection technique than when using traditional techniques. In addition, they achieved significantly higher precision and recall with our spatially relaxed selection technique compared to traditional techniques.
{"title":"Relaxed selection techniques for querying time-series graphs","authors":"Christian Holz, Steven K. Feiner","doi":"10.1145/1622176.1622217","DOIUrl":"https://doi.org/10.1145/1622176.1622217","url":null,"abstract":"Time-series graphs are often used to visualize phenomena that change over time. Common tasks include comparing values at different points in time and searching for specified patterns, either exact or approximate. However, tools that support time-series graphs typically separate query specification from the actual search process, allowing users to adapt the level of similarity only after specifying the pattern. We introduce relaxed selection techniques, in which users implicitly define a level of similarity that can vary across the search pattern, while creating a search query with a single-gesture interaction. Users sketch over part of the graph, establishing the level of similarity through either spatial deviations from the graph, or the speed at which they sketch (temporal deviations). In a user study, participants were significantly faster when using our temporally relaxed selection technique than when using traditional techniques. In addition, they achieved significantly higher precision and recall with our spatially relaxed selection technique compared to traditional techniques.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"18 1","pages":"213-222"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72863944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}