AttribIt: Content Creation with Semantic Attributes
S. Chaudhuri, E. Kalogerakis, S. Giguere, T. Funkhouser
DOI: https://doi.org/10.1145/2501988.2502008

We present AttribIt, an approach for people to create visual content using relative semantic attributes expressed in linguistic terms. During an off-line processing step, AttribIt learns semantic attributes for design components that reflect the high-level intent people may have for creating content in a domain (e.g. adjectives such as "dangerous", "scary" or "strong") and ranks components according to the strength of each learned attribute. Then, during an interactive design session, a person can explore different combinations of visual components using commands based on relative attributes (e.g. "make this part more dangerous"). Novel designs are assembled in real time as the strengths of selected attributes are varied, enabling rapid, in-situ exploration of candidate designs. We applied this approach to 3D modeling and web design. Experiments suggest this interface is an effective alternative for novices performing tasks with high-level design goals.
{"title":"Session details: Applications and games","authors":"Xiaojun Bi","doi":"10.1145/3254706","DOIUrl":"https://doi.org/10.1145/3254706","url":null,"abstract":"","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"27 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120810726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Dog Programming Language
Salman Ahmad, S. Kamvar
DOI: https://doi.org/10.1145/2501988.2502026

Today, most popular software applications are deployed in the cloud, interact with many users, and run on multiple platforms from Web browsers to mobile operating systems. While these applications confer a number of benefits to their users, building them brings many challenges: manually managing state between asynchronous user actions, creating and maintaining separate code bases for each desired client platform, and gracefully scaling to handle a large number of concurrent users. Dog is a new programming language that addresses these challenges and others through a unique runtime model that allows developers to model scalable cross-client applications as an imperative control flow -- simplifying many development tasks. In this paper we describe the key features of Dog and show its utility through several applications that are difficult and time-consuming to write in existing languages, but are simple and easily written in Dog in a few lines of code.
{"title":"Session details: Tangible and fabrication","authors":"Patrick Baudisch","doi":"10.1145/3254707","DOIUrl":"https://doi.org/10.1145/3254707","url":null,"abstract":"","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130729450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Touch Scrolling Transfer Functions
Philip Quinn, Sylvain Malacria, A. Cockburn
DOI: https://doi.org/10.1145/2501988.2501995

Touch scrolling systems use a transfer function to transform gestures on a touch-sensitive surface into scrolling output. The design of these transfer functions is complex, as they must facilitate precise direct manipulation of the underlying content as well as rapid scrolling through large datasets. However, researchers' ability to refine them is impaired by: (1) limited understanding of how users express scrolling intentions through touch gestures; (2) lack of knowledge of proprietary transfer functions, causing researchers to evaluate techniques that may misrepresent the state of the art; and (3) a lack of tools for examining existing transfer functions. To address these limitations, we examine how users express scrolling intentions in a human factors experiment; we describe methods to reverse engineer existing 'black box' transfer functions, including use of an accurate robotic arm; and we use the methods to expose the functions of Apple iOS and Google Android, releasing data tables and software to assist replication. We discuss how this new understanding can improve experimental rigour and assist iterative improvement of touch scrolling.
Skillometers: Reflective Widgets that Motivate and Help Users to Improve Performance
Sylvain Malacria, Joey Scarr, A. Cockburn, C. Gutwin, Tovi Grossman
DOI: https://doi.org/10.1145/2501988.2501996

Applications typically provide ways for expert users to increase their performance, such as keyboard shortcuts or customization, but these facilities are frequently ignored. To help address this problem, we introduce skillometers -- lightweight displays that visualize the benefits available through practicing, adopting a better technique, or switching to a faster mode of interaction. We present a general framework for skillometer design, then discuss the design and implementation of a real-world skillometer intended to increase hotkey use. A controlled experiment shows that our skillometer successfully encourages earlier and faster learning of hotkeys. Finally, we discuss general lessons for future development and deployment of skillometers.
Good Vibrations: An Evaluation of Vibrotactile Impedance Matching for Low-Power Wearable Applications
Jack Lindsay, Iris Jiang, Eric C. Larson, R. Adams, Shwetak N. Patel, B. Hannaford
DOI: https://doi.org/10.1145/2501988.2502051

Vibrotactile devices suffer from poor energy efficiency, arising from a mismatch between the impedance of the actuator and that of the human skin. This results in over-sized actuators and excessive power consumption, and prevents development of more sophisticated, miniaturized and low-power mobile tactile devices. In this paper, we present the experimental evaluation of a vibrotactile system designed to match the impedance of the skin to that of the actuator. This system is able to quadruple the motion of the skin without increasing power consumption, and to produce sensations equivalent to a standard system while consuming half the power. By greatly reducing the size and power constraints of vibrotactile actuators, this technology offers a means to realize more sophisticated, smaller haptic devices for the user interface community.
ViziCal: Accurate Energy Expenditure Prediction for Playing Exergames
Miran Kim, J. Angermann, G. Bebis, Eelke Folmer
DOI: https://doi.org/10.1145/2501988.2502009

In recent years, exercise games have been criticized for failing to engage players in physical activity at levels high enough to yield health benefits. A major challenge in the design of exergames, however, is that the amount of physical activity an exergame yields is difficult to assess, owing to limitations of existing techniques for measuring the energy expenditure of exergaming activities. Building on recent advances in commercial depth-sensing technology for accurately tracking players' motions in 3D, we present ViziCal, a technique that uses a non-linear regression approach to accurately predict energy expenditure in real time. ViziCal may allow for exergames that can report energy expenditure during play, and whose intensity can be adjusted in real time to stimulate larger health benefits.
Mime: Compact, Low-Power 3D Gesture Sensing for Interaction with Head-Mounted Displays
Andrea Colaco, Ahmed Kirmani, Hye Soo Yang, Nan-Wei Gong, C. Schmandt, Vivek K Goyal
DOI: https://doi.org/10.1145/2501988.2502042

We present Mime, a compact, low-power 3D sensor for unencumbered free-form, single-handed gestural interaction with head-mounted displays (HMDs). Mime introduces a real-time signal processing framework that combines a novel three-pixel time-of-flight (TOF) module with a standard RGB camera. The TOF module achieves accurate 3D hand localization and tracking, thus enabling motion-controlled gestures. The joint processing of 3D information with RGB image data enables finer, shape-based gestural interaction. Our Mime hardware prototype achieves fast and precise 3D gestural control. Compared with state-of-the-art 3D sensors like TOF cameras, the Microsoft Kinect and the Leap Motion Controller, Mime offers several key advantages for mobile applications and HMD use cases: very small size, daylight insensitivity, and low power consumption. Mime is built using standard, low-cost optoelectronic components and promises to be an inexpensive technology that can either be a peripheral component or be embedded within the HMD unit. We demonstrate the utility of the Mime sensor for HMD interaction with a variety of application scenarios, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight conditions.
SeeSS: Seeing What I Broke -- Visualizing Change Impact of Cascading Style Sheets (CSS)
Hsiang-Sheng Liang, Kuan-Hung Kuo, Po-Wei Lee, Yu-Chien Chan, Yu-Chin Lin, Mike Y. Chen
DOI: https://doi.org/10.1145/2501988.2502006

Cascading Style Sheets (CSS) is a fundamental web language for describing the presentation of web pages. CSS rules are often reused across multiple parts of a page, and across multiple pages throughout a site, to reduce repetition and provide a consistent look and feel. When a CSS rule is modified, developers currently have to manually track and visually inspect all parts of the site that may be impacted by that change. We present SeeSS, a system that automatically tracks CSS change impact across a site and enables developers to visualize all impacted page fragments. The fragments are sorted by severity, and the differences before and after the change are highlighted using animation.