Off-Limits: Interacting Beyond the Boundaries of Large Displays
Anders Markussen, Sebastian Boring, M. R. Jakobsen, K. Hornbæk
The size of information spaces often exceeds the limits of even the largest displays. This makes navigating such spaces through on-screen interactions demanding. However, if users imagine the information space extending in a plane beyond the display's boundaries, they might be able to use the space beyond the display for input. This paper investigates Off-Limits, an interaction concept extending the input space of a large display into the space beyond the screen through the use of mid-air pointing. We develop and evaluate the concept through three empirical studies in one-dimensional space: First, we explore benefits and limitations of off-screen pointing compared to touch interaction and mid-air on-screen pointing; next, we assess users' accuracy in off-screen pointing to model the distance-to-screen vs. accuracy trade-off; and finally, we show how Off-Limits is further improved by applying that model to the naïve approach. Overall, we found that the final Off-Limits concept provides significant performance benefits over on-screen and touch pointing conditions.
CHI '16. https://doi.org/10.1145/2858036.2858083
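
To make the off-screen pointing idea concrete, the sketch below ray-casts a mid-air pointing ray onto the plane of the display and keeps intersections that fall beyond the display edge as off-screen input, reduced to the paper's one-dimensional setting. The function name, the coordinate conventions, and the 1D simplification are illustrative assumptions, not the authors' implementation.

    def pointing_position_1d(hand_x, hand_z, dir_x, dir_z, display_width):
        """Intersect a pointing ray (origin at the hand, direction (dir_x, dir_z))
        with the display plane z = 0 and return (x, on_screen).

        hand_z is the hand's distance in front of the display (> 0);
        dir_z must point toward the display (< 0)."""
        if dir_z >= 0:
            return None  # the ray never reaches the display plane
        t = -hand_z / dir_z           # ray parameter at which z reaches 0
        x = hand_x + t * dir_x        # horizontal hit position on the display plane
        on_screen = 0.0 <= x <= display_width
        return x, on_screen

    # Pointing past the right edge of a 3 m wide display:
    print(pointing_position_1d(hand_x=2.5, hand_z=1.0, dir_x=0.75, dir_z=-0.5,
                               display_width=3.0))  # -> (4.0, False), i.e. off-screen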

What you Sculpt is What you Get: Modeling Physical Interactive Devices with Clay and 3D Printed Widgets
Michael D. Jones, Kevin Seppi, D. Olsen
We present a method for fabricating prototypes of interactive computing devices from clay sculptures without requiring the designer to be skilled in CAD software. The method creates a "what you sculpt is what you get" process that mimics the "what you see is what you get" processes used in interface design for 2D screens. Our approach uses clay to model the basic shape of the device around 3D printed representations, which we call "blanks", of physical interaction widgets such as buttons, sliders, knobs, and other electronics. Each blank includes four fiducial markers uniquely arranged on a visible surface. Once the sculpture is scanned, these markers allow our software to identify widget types and locations in the scanned model. The software then converts the scan into a printable prototype by positioning mounting surfaces, openings for the controls, and a splitting plane for assembly. Because the blanks fit in the sculpted shape, they will reliably fit in the interactive prototype. Creating an interactive prototype requires about 30 minutes of human effort for sculpting and, after scanning, a single button click to run the conversion.
CHI '16. https://doi.org/10.1145/2858036.2858493
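
As a rough illustration of the widget-identification step, the sketch below assumes that identification reduces to matching the set of four marker IDs found on each blank and averaging their positions to locate the mount point. The actual system may also use the markers' geometric arrangement, and the registry contents and names here are purely hypothetical.

    # Hypothetical registry: each blank's four marker IDs identify its widget type.
    WIDGET_REGISTRY = {
        frozenset({1, 2, 3, 4}): "button",
        frozenset({5, 6, 7, 8}): "slider",
        frozenset({9, 10, 11, 12}): "knob",
    }

    def identify_widget(markers):
        """markers: list of (marker_id, (x, y, z)) detected on one blank.
        Returns (widget_type, centroid); the centroid approximates the mount location."""
        ids = frozenset(marker_id for marker_id, _ in markers)
        if len(ids) != 4:
            raise ValueError("expected exactly four distinct fiducial markers per blank")
        widget_type = WIDGET_REGISTRY.get(ids, "unknown")
        xs, ys, zs = zip(*(pos for _, pos in markers))
        n = len(markers)
        centroid = (sum(xs) / n, sum(ys) / n, sum(zs) / n)
        return widget_type, centroid

    print(identify_widget([(6, (0, 0, 0)), (5, (2, 0, 0)), (8, (2, 2, 0)), (7, (0, 2, 0))]))
    # -> ('slider', (1.0, 1.0, 0.0))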

"This is how I want to learn": High Functioning Autistic Teens Co-Designing a Serious Game
Benoît Bossavit, S. Parsons
This paper presents a project that developed a Serious Game with a Natural User Interface via a Participatory Design approach with two adolescents with High-Functioning Autism (HFA). The project took place in a highly specialized school for young people with Special Educational Needs (SEN). The teenagers were empowered by assigning them specific roles across several sessions: they could express their voices as users, informants, designers, and testers. As a result, teachers and young people developed a digital educational game based on their experience as video gamers to improve academic skills in Geography. The paper contributes by describing the sensitive and flexible approach to the design process that promoted stakeholders' participation.
CHI '16. https://doi.org/10.1145/2858036.2858322

Evaluation of a Smart-Restorable Backspace Technique to Facilitate Text Entry Error Correction
A. Arif, Sunjun Kim, W. Stuerzlinger, Geehyuk Lee, Ali Mazalek
We present a new smart-restorable backspace technique to facilitate correction of "overlooked" errors on touchscreen-based tablets. We conducted an empirical study to compare the new backspace technique with the conventional one. Results revealed that the new technique improves overall text entry performance, in terms of both speed and operations per character, by significantly reducing error correction effort. In addition, most users preferred the new technique to the conventional one on their tablets and found it easy to learn and use. Most of them also felt that it improved their overall text entry performance and thus wanted to keep using it.
CHI '16. https://doi.org/10.1145/2858036.2858407
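
The abstract does not spell out how the restoration works; one plausible reading is that the characters erased while backspacing to an overlooked error are buffered and re-inserted in a single step once the error has been fixed. The toy buffer below implements only that reading, with hypothetical names, and ignores details such as cursor movement or when restoration is offered; it is not necessarily the authors' technique.

    class SmartRestorableBuffer:
        """Toy text buffer: backspacing stores the erased characters so they can be
        restored in one step after an earlier, overlooked error has been fixed."""

        def __init__(self, text=""):
            self.text = text
            self.erased = []          # characters removed by backspace, newest last

        def backspace(self):
            if self.text:
                self.erased.append(self.text[-1])
                self.text = self.text[:-1]

        def type(self, s):
            self.text += s

        def restore(self):
            """Re-insert everything erased since the correction began."""
            while self.erased:
                self.text += self.erased.pop()

    buf = SmartRestorableBuffer("the quick brwn fox")
    for _ in range(6):        # backspace over "wn fox" to reach the missing letter
        buf.backspace()
    buf.type("o")             # fix the overlooked error
    buf.restore()             # bring back the erased "wn fox" in one step
    print(buf.text)           # -> "the quick brown fox"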

Universal Design Ballot Interfaces on Voting Performance and Satisfaction of Voters with and without Vision Loss
S. Lee, E. Y. Liu, Ljilja Ruzic, J. Sanford
Voting is a glocalized event across countries, states, and municipalities in which individuals of all abilities want to participate. To enable people with disabilities to participate, accessible voting is typically implemented by adding assistive technologies to electronic voting machines. To overcome the complexities and inequities of this practice, two interfaces were designed to provide one system for all voters: EZ Ballot, which uses a linear yes/no input system for all selections, and QUICK Ballot, which provides random-access voting through direct selection. This paper reports efficacy testing of both interfaces. The study demonstrated that voters with a range of visual abilities were able to use both ballots independently. While non-sighted voters made fewer errors on the linear ballot (EZ Ballot), partially sighted and sighted voters completed the random-access ballot (QUICK Ballot) in less time. In addition, a higher percentage of non-sighted participants preferred the linear ballot, and a higher percentage of sighted participants preferred the random-access ballot.
CHI '16. https://doi.org/10.1145/2858036.2858567

Utilizing Employees as Usability Participants: Exploring When and When Not to Leverage Your Coworkers
Joanne Locascio, Rushil Khurana, Yan He, Joseph Kaye
Usability testing is an everyday practice for usability professionals in corporations. But, as in all experimental situations, who you study can be as important as what you study. In this Note we explore a common practice in the corporation: experimenting on the company's employees. While fellow employees can be convenient and avoid issues such as confidentiality, we use two usability studies of mobile and web applications to show that employees spend less time-on-task on competitor websites than non-employees. Non-employees reliably rate competitor websites and apps higher than employees on both usability (on the 10-question SUS scale) and ease of use (on the 1-question SEQ scale). We conclude with recommendations for best practices for usability testing in the corporation.
CHI '16. https://doi.org/10.1145/2858036.2858047
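
The abstract does not restate how the SUS ratings are scored; the snippet below shows the standard System Usability Scale scoring (Brooke, 1996) that such studies conventionally use, with an illustrative set of responses rather than data from the paper.

    def sus_score(responses):
        """Standard SUS scoring: ten ratings on a 1-5 scale, in questionnaire order.
        Odd-numbered items are positively worded (contribute rating - 1); even-numbered
        items are negatively worded (contribute 5 - rating); the sum is scaled to 0-100."""
        if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
            raise ValueError("SUS needs ten ratings between 1 and 5")
        contributions = [(r - 1) if i % 2 == 0 else (5 - r)   # i is 0-based, so even i = odd-numbered item
                         for i, r in enumerate(responses)]
        return sum(contributions) * 2.5

    print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0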

B2B-Swipe: Swipe Gesture for Rectangular Smartwatches from a Bezel to a Bezel
Yuki Kubo, B. Shizuki, J. Tanaka
We present B2B-Swipe, a single-finger swipe gesture for a rectangular smartwatch that starts at a bezel and ends at a bezel to enrich input vocabulary. There are 16 possible B2B-Swipes because a rectangular smartwatch has four bezels. Moreover, B2B-Swipe can be implemented with a single-touch screen with no additional hardware. Our study shows that B2B-Swipe can co-exist with Bezel Swipe and Flick, with an error rate of 3.7% under the sighted condition and 8.0% under the eyes-free condition. Furthermore, B2B-Swipe is potentially accurate (i.e., the error rates were 0% and 0.6% under the sighted and eyes-free conditions) if the system uses only B2B-Swipes for touch gestures.
CHI '16. https://doi.org/10.1145/2858036.2858216
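
A minimal sketch of the gesture labelling this implies: a bezel-to-bezel swipe is characterised by the bezel it starts from and the bezel it ends at, giving the 4 x 4 = 16 combinations. The edge-band heuristic, thresholds, and names below are illustrative assumptions rather than the paper's recogniser, which presumably works directly with touch events at the screen edges.

    def nearest_bezel(x, y, width, height, band=0.15):
        """Return which bezel ('left', 'right', 'top', 'bottom') the point (x, y)
        is closest to, or None if it lies in the screen's interior.  band is the
        fraction of the screen treated as the bezel-adjacent region."""
        margins = {
            "left": x,
            "right": width - x,
            "top": y,
            "bottom": height - y,
        }
        bezel, distance = min(margins.items(), key=lambda kv: kv[1])
        limit = band * (width if bezel in ("left", "right") else height)
        return bezel if distance <= limit else None

    def classify_b2b_swipe(start, end, width, height):
        """Label a swipe as one of the 16 bezel-to-bezel gestures, or None."""
        a = nearest_bezel(*start, width, height)
        b = nearest_bezel(*end, width, height)
        return (a, b) if a and b else None

    # A swipe from the left bezel to the bottom bezel on a 320 x 320 px screen:
    print(classify_b2b_swipe((5, 160), (160, 315), 320, 320))  # -> ('left', 'bottom')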

Modeling the Steering Time Difference between Narrowing and Widening Tunnels
Shota Yamanaka, Homei Miyashita
The performance of trajectory-based tasks is modeled by the steering law, which predicts the required time from the index of difficulty (ID). This paper focuses on the fact that the time required to pass through a straight path with linearly varying width differs depending on the direction of movement. In this study, we develop an expression for the relationship between the IDs of narrowing and widening paths. This expression can be used to predict the movement time needed to traverse a path in one direction from only a few data points, after measuring the time needed in the opposite direction. In the experiment, the times for five IDs were predicted with high precision from the measured time for one ID, illustrating the effectiveness of the proposed method.
CHI '16. https://doi.org/10.1145/2858036.2858037
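
For readers unfamiliar with the steering law, the following recalls its standard form and the index of difficulty of a straight tunnel whose width varies linearly (Accot & Zhai). Note that this classic ID is symmetric in the start and end widths, which is exactly why a measured time difference between narrowing and widening tunnels calls for the extra modeling the paper provides; the derivation below is the textbook one, not the paper's new expression.

    % Steering law: movement time is linear in the index of difficulty,
    %   T = a + b * ID.
    % For a straight tunnel of length A whose width varies linearly from W_1 to W_2,
    %   W(x) = W_1 + (W_2 - W_1) x / A,
    % the standard index of difficulty is
    \[
      ID \;=\; \int_0^A \frac{\mathrm{d}x}{W(x)}
         \;=\; \frac{A}{W_2 - W_1}\,\ln\frac{W_2}{W_1}, \qquad W_1 \neq W_2,
    \]
    % which is unchanged when W_1 and W_2 are swapped: the same ID whether the
    % tunnel narrows or widens, even though the measured times differ.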

Haptic Retargeting: Dynamic Repurposing of Passive Haptics for Enhanced Virtual Reality Experiences
Mahdi Azmandian, Mark S. Hancock, Hrvoje Benko, E. Ofek, Andrew D. Wilson
Manipulating a virtual object with appropriate passive haptic cues provides a satisfying sense of presence in virtual reality. However, scaling such experiences to support multiple virtual objects is a challenge, as each one needs to be accompanied by a precisely located haptic proxy object. We propose a solution that overcomes this limitation by hacking human perception. We have created a framework for repurposing passive haptics, called haptic retargeting, that leverages the dominance of vision when our senses conflict. With haptic retargeting, a single physical prop can provide passive haptics for multiple virtual objects. We introduce three approaches for dynamically aligning physical and virtual objects: world manipulation, body manipulation, and a hybrid technique that combines the two. Our study results indicate that all three haptic retargeting techniques improve the sense of presence compared to typical wand-based 3D control of virtual objects. Furthermore, the hybrid technique achieved the highest satisfaction and presence scores while limiting visible side effects during interaction.
CHI '16. https://doi.org/10.1145/2858036.2858226
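
To illustrate the kind of manipulation involved, the sketch below shows a simple body-warping-style offset: as the real hand travels from its start pose toward the physical prop, a growing offset is blended into the rendered hand so that it reaches the virtual target exactly when the real hand reaches the prop. The linear blend, the names, and the use of hand distance as the progress measure are assumptions for illustration, not the paper's algorithms.

    import numpy as np

    def warped_hand(real_hand, start, prop, target):
        """Return the position at which to render the virtual hand."""
        real_hand, start, prop, target = map(np.asarray, (real_hand, start, prop, target))
        total = np.linalg.norm(prop - start)
        progress = 0.0 if total == 0 else np.clip(np.linalg.norm(real_hand - start) / total, 0.0, 1.0)
        offset = target - prop                  # mismatch between virtual target and physical prop
        return real_hand + progress * offset    # blend the offset in as the reach progresses

    # Reaching 2 m to a prop at the origin while the virtual target sits 0.3 m to the right;
    # halfway through the reach the rendered hand is shifted by half the offset:
    print(warped_hand(real_hand=[1.0, 0, 0], start=[2.0, 0, 0],
                      prop=[0.0, 0, 0], target=[0.3, 0, 0]))   # -> [1.15 0.   0.  ]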

Influence of Display Transparency on Background Awareness and Task Performance
David Lindlbauer, Klemen Lilija, Robert Walter, Jörg Müller
It has been argued that transparent displays benefit certain tasks by allowing users to see on-screen content and the environment behind the display simultaneously. However, it is still unclear how much background awareness users actually gain, and whether performance suffers for tasks performed on the transparent display now that users are no longer shielded from distractions. We therefore investigate the influence of display transparency on task performance and background awareness in a dual-task scenario. We conducted an experiment comparing transparent displays with conventional displays in different horizontal and vertical configurations. Participants performed an attention-demanding primary task on the display while simultaneously monitoring the background for target stimuli. Our results show that transparent and horizontal displays increase participants' ability to observe the background while keeping primary task performance constant.
CHI '16. https://doi.org/10.1145/2858036.2858453