"User centred design in practice: is it working?" — Donna Maurer. DOI: 10.1145/1228175.1228178
Proceedings of the 18th Australia Conference on Computer-Human Interaction: Design: Activities, Artefacts and Environments (OzCHI 2006), 20 November 2006.

In this talk, I look at the current state of user-centred design from a practitioner perspective. I examine where I think the field is up to, why we still struggle to be accepted, and what we need to do to move forward and contribute to truly great design.
"One-key keyboard: a very small QWERTY keyboard supporting text entry for wearable computing" — Seoktae Kim, Minjung Sohn, Jinhee Pak, Woohun Lee. DOI: 10.1145/1228175.1228229

Most commercialized wearable text input devices are wrist-worn keyboards that minimize size by reducing the number of keys. A drastic key reduction made for the sake of wearability generally increases KSPC (keystrokes per character), decreases text entry performance, and requires additional effort to learn a new typing method, so designers of wearable keyboards face a wearability-usability tradeoff. To address this problem, we adopted a different minimization method, reducing key pitch, and developed the One-key Keyboard. A traditional desktop keyboard has one key per character; the One-key Keyboard has a single key (70 mm × 35 mm) on which a 10 × 5 QWERTY key array is printed. It detects the position of the fingertip at the moment of the keying event and infers the character entered. In a text entry performance test comprising five sessions, participants typed 18.9 WPM with a 6.7% error rate across all sessions, reaching up to 24.5 WPM. Based on these results, the One-key Keyboard was evaluated as a promising text input device for wearable computing, balancing wearability, social acceptance, input speed, and learnability.
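The core mechanism described in the abstract above, mapping a fingertip position on a single 70 mm × 35 mm key to a character in a printed 10 × 5 QWERTY array, can be sketched as follows. This is a hypothetical illustration, not the authors' implementation; the exact row layout (especially the bottom row) and the uniform 7 mm key pitch are assumptions.

```python
# Hypothetical sketch: one physical key carries a printed 10 x 5 QWERTY
# array, so each virtual key occupies a 7 mm x 7 mm cell. At the keying
# event, the sensed fingertip position (x, y) in mm selects a character.

QWERTY_10x5 = [
    "1234567890",
    "qwertyuiop",
    "asdfghjkl;",
    "zxcvbnm,./",
    "     _    ",  # bottom-row layout is an assumption
]

KEY_W = 70 / 10  # 7 mm horizontal pitch
KEY_H = 35 / 5   # 7 mm vertical pitch

def char_at(x_mm: float, y_mm: float) -> str:
    """Map a fingertip position on the pad to the printed character."""
    col = min(int(x_mm // KEY_W), 9)  # clamp to the rightmost column
    row = min(int(y_mm // KEY_H), 4)  # clamp to the bottom row
    return QWERTY_10x5[row][col]
```

A real device would also need to compensate for systematic touch offsets and finger contact area, which this sketch ignores.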
"Usability of online grocery systems: a focus on errors" — M. Freeman, Alison Norris, P. Hyland. DOI: 10.1145/1228175.1228222

Error recording in usability studies has traditionally focused on safety-critical systems and business support systems. This study applies Zapf et al.'s taxonomy of errors to a non-work context, an online grocery system. Applying the taxonomy showed that all user groups made similar types of errors, but the number of errors recorded varied between groups. This finding contrasts with previous studies and supports the common perception that beginners make more errors than experienced users.
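The analysis the abstract above describes, recording errors by taxonomy category and comparing counts across user groups, can be sketched minimally. The category and group names below are illustrative placeholders, not Zapf et al.'s actual categories or the study's data.

```python
from collections import Counter

# Hypothetical sketch: tally observed usability errors by taxonomy
# category and by user group. Observation tuples are invented data.

observations = [
    ("beginner", "knowledge error"),
    ("beginner", "usability problem"),
    ("beginner", "knowledge error"),
    ("experienced", "usability problem"),
]

def tally(obs):
    """Count errors per (group, category) cell and per group overall."""
    by_cell = Counter(obs)
    by_group = Counter(group for group, _ in obs)
    return by_cell, by_group

by_cell, by_group = tally(observations)
# by_group compares total error counts across groups, while by_cell
# shows whether the same error categories appear in each group.
```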
"Saxaren: strengthening informal collaboration among geographically distributed teachers" — K. Groth, Sinna Lindquist, Cristian Bogdan, Tobias Lidskog, Y. Sundblad, O. Sandor. DOI: 10.1145/1228175.1228224

For people who work with few colleagues around them, information technology can increase communication with colleagues at other sites. One such group is teachers in rural areas. Working with teachers at an archipelago school distributed over six islands, we focused on encouraging communication through a digital notice board that connects all the islands and supports quick handwritten notes. Based on the teachers' collaborative situation and on the design, implementation, and use of the prototype, we use a number of recorded notes to illustrate how the teachers have used it, relating the findings to group building, easy access, and playful behaviour.
"Tools for designing and delivering multiple-perspective scenarios" — Wally Smith, Daghan L. Acay, Ramon Fano, G. Ratner. DOI: 10.1145/1228175.1228207

This paper describes two prototype tools developed as part of a design-based investigation into multiple-perspective scenarios. A multiple-perspective scenario is constructed as many different narratives about the same events, with the intention of exploring how the different perspectives might be coordinated or reach some accommodation. The first prototype assists an author in constructing such a scenario; the second delivers the scenario to physically distributed groups who communicate with each other via video-conferencing. This exploratory investigation demonstrates that the scenario must be represented with extensive additional data and metadata to render the authors' intentions visible and meaningful. The scenario can then be re-played in a way that allows the issues it contains to be rediscovered.
"Evaluating clinicians' experience in a telemedicine application: a presence perspective" — L. Alem, S. Hansen, Jane Li. DOI: 10.1145/1228175.1228187

The Virtual Critical Care Unit (ViCCU®) is a telemedicine system that allows a specialist at a major referral hospital to direct a team in a rural hospital. ViCCU® supports remote consultation by transmitting multiple channels of real-time video/audio of the patient, the clinical team, x-ray/paper documents, and patient vital signs from the remote site to the specialist. This paper explores clinicians' experience of presence in a telemedicine application. We used a modified version of the Slater-Usoh-Steed (SUS) presence questionnaire to measure clinicians' sense of presence when using ViCCU®, and explored the relationship between that presence and personal, usability, and media factors. Initial results indicate that, in this context, personal factors influenced clinicians' experience of presence, and that presence was positively related to both usability and media factors. We also reflect on the challenges of conducting this study in an emergency department and on the appropriateness of the SUS presence measure in this real setting.
"Augmented reality authoring: generic context from programmer to designer" — A. Hampshire, H. Seichter, R. Grasset, M. Billinghurst. DOI: 10.1145/1228175.1228259

Developing an Augmented Reality (AR) application is usually a long and non-intuitive task. Few methodologies address this problem, and the tools that implement them are limited or non-existent; to date there is no efficient, easy-to-use development tool tailored to the needs of Mixed Reality (MR). We present an initial taxonomy of MR applications that addresses the different levels of abstraction at which the relation between the real and virtual worlds can be defined. We then demonstrate some development approaches and describe tools and libraries we implemented to illustrate aspects of our authoring taxonomy. Finally, we provide a definition of the requirements for a new generation of AR rapid application development (RAD) tools, based on actual implementations.
"Evaluating an in-vivo surgical training demonstration over broadband internet" — D. Stevenson. DOI: 10.1145/1228175.1228186

This paper evaluates a demonstration of in-vivo (live) surgery over a broadband Internet connection between the USA and Australia. The evaluation targets two aspects in particular: the use of a remote 3D video display of the laparoscopic surgery, and the structuring of the demonstration to replicate actual surgical training in the operating room. The evaluation materials include preliminary design and preparation records, recordings of the two-way video data and of the surgical video, exit questionnaires, debriefing discussion notes, and follow-up interviews with the participants. Prior work in the area is surveyed and the demonstration is positioned with respect to it. Conclusions are drawn about the effectiveness of the two key aspects of the demonstration and about possibilities for future work.
"Spatial sound localization in an augmented reality environment" — J. Sodnik, S. Tomažič, R. Grasset, Andreas Dünser, M. Billinghurst. DOI: 10.1145/1228175.1228197

Augmented Reality (AR), the overlay of virtual images onto the real world, is an increasingly popular technique for developing new human-computer interfaces. Because human navigation and orientation depend on both visual and auditory information, sound plays an important role in AR applications. In this paper we explore users' ability to localize a spatial sound registered with a virtual object in an AR environment, under different spatial configurations of the virtual scene. The results confirm several previous findings on sound localization and also point out important new visual-audio cues that should be taken into account for effective localization and orientation in an AR environment. Finally, the paper provides tentative guidelines for adding spatial sound to AR environments.
"Transient life: collecting and sharing personal information" — S. Smale, S. Greenberg. DOI: 10.1145/1228175.1228184

Millions of people post personal information on the internet, yet the information itself varies greatly. Some pieces are extremely brief, others highly detailed; some track moment-to-moment changes in a person's state and thoughts, others describe stable, long-lasting traits. To handle this diversity, we created Transient Life: a system that lets a person gather personal 'transient' information tidbits on the fly and share them with others. Transient Life is designed as a modular sidebar located on the display's periphery. A person uses its modules to update momentary personal state (feelings, location, happenings, and thoughts), record activity milestones done over the day as well as a 'to do' list of things left to do, collect interesting URLs and photos, and compose text essays on whatever has captured their interest. A person can selectively post this information as a 'today message' to their community, and the essay to their personal blog. Information is kept in a History Calendar, which lets one view what was recorded on any past date.
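The modular day record and selective posting that the abstract above describes can be sketched as a simple data model. This is a hypothetical illustration of the idea only; the field names and the sharing rule are assumptions, not Transient Life's actual design.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch: the kind of per-day record that sidebar modules
# might populate, with a selectively composed 'today message'.

@dataclass
class DayRecord:
    day: date
    state: dict = field(default_factory=dict)       # feelings, location, ...
    milestones: list = field(default_factory=list)  # things done today
    todos: list = field(default_factory=list)       # things left to do
    links: list = field(default_factory=list)       # collected URLs/photos
    essay: str = ""                                 # longer text piece

def today_message(rec: DayRecord, include_essay: bool = False) -> dict:
    """Compose the selectively shared 'today message' from a day record."""
    msg = {"day": rec.day.isoformat(), "state": rec.state,
           "milestones": rec.milestones, "links": rec.links}
    if include_essay:  # the essay goes to the blog unless opted in here
        msg["essay"] = rec.essay
    return msg
```

The point of the split is that private fields (the to-do list, the essay) stay out of the shared message unless explicitly included.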