Low cost vs. high-end eye tracking for usability testing
Sune Alstrup Johansen, Javier San Agustin, H. Skovsgaard, J. P. Hansen, M. Tall
DOI: 10.1145/1979742.1979744
Abstract: The accuracy of an open-source remote eye tracking system and a state-of-the-art commercial eye tracker was measured four times during a usability test. Results from 9 participants showed both devices to be fairly stable over time, but the commercial tracker was more accurate, with a mean error of 31 pixels against 59 pixels for the low-cost system. This suggests that low-cost eye tracking can become a viable alternative when usability studies need not distinguish between, for instance, the particular words or menu items participants are looking at, but only between the larger areas of interest they attend to.
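The trade-off the abstract describes, tracker error versus area-of-interest size, can be sketched in a few lines. The two mean errors are the paper's reported values; the AOI rectangles and the half-extent heuristic are illustrative assumptions, not from the paper.

```python
def aoi_hit(gaze, aoi_rect):
    """True if a gaze point (x, y) falls inside an AOI rectangle (x, y, w, h)."""
    gx, gy = gaze
    ax, ay, aw, ah = aoi_rect
    return ax <= gx <= ax + aw and ay <= gy <= ay + ah

def robust_to_error(aoi_rect, mean_error_px):
    """Crude heuristic: an AOI tolerates tracker error if its half-extent
    in the smaller dimension exceeds the mean gaze error."""
    _, _, aw, ah = aoi_rect
    return min(aw, ah) / 2 > mean_error_px

LOW_COST_ERROR = 59    # px, open-source tracker (reported in the paper)
COMMERCIAL_ERROR = 31  # px, commercial tracker (reported in the paper)

menu_item = (100, 40, 200, 24)    # hypothetical narrow menu item
content_pane = (0, 80, 400, 300)  # hypothetical large area of interest

assert not robust_to_error(menu_item, LOW_COST_ERROR)  # too fine-grained
assert robust_to_error(content_pane, LOW_COST_ERROR)   # large AOIs are fine
```

Under this heuristic, a 24 px high menu item cannot be resolved reliably with a 59 px mean error, while a 400×300 px pane can — matching the abstract's conclusion about when low-cost tracking suffices.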
CalmMeNow: exploratory research and design of stress mitigating mobile interventions
P. Paredes, Matthew K. Chan
DOI: 10.1145/1979742.1979831
Abstract: This paper describes design explorations for stress mitigation on mobile devices based on three types of interventions: haptic feedback, games, and social networks. The paper offers a qualitative assessment of the usability of these three types of interventions, together with an initial analysis of their potential efficacy. Social networking and games show great potential for stress relief. Lastly, the paper discusses key findings and considerations for long-term studies of stress mitigation in HCI, as well as a list of aspects to be considered when designing calming interventions.
Floating avatar: telepresence system using blimps for communication and entertainment
Hiroaki Tobita, Shigeaki Maruyama, Takuya Kuzi
DOI: 10.1145/1979742.1979625
Abstract: We developed a floating avatar system that integrates a blimp with a virtual avatar to create a unique telepresence system. Our blimp works as an avatar and carries several pieces of equipment, including a projector and a speaker as output functions. Users can communicate with others by transmitting their facial image through the projector and their voice through the speaker. A camera and microphone attached to the blimp provide the input function and support the user's manipulation from a distance. The user's presence is dramatically enhanced compared to conventional virtual avatars (e.g., CG and images) because the avatar is a physical object that can move freely in the real world. In addition, the user's senses are augmented because the blimp detects dynamic information in the real world. For example, the camera provides the user with a special floating view, and the microphone catches a wide variety of sounds, such as conversations and environmental noise. This paper describes our floating avatar concept and its implementation.
Session details: alt.chi: emotions, ethics, and civics
Daniel J. Wigdor
DOI: 10.1145/3249075
Physical activity with digital companions
L. Boschman
DOI: 10.1145/1979742.1979684
Abstract: While a majority of adults in industrialized countries do not exercise frequently enough to sustain physical health, games with an exertive interface — exergames — have been proposed as vehicles to increase activity levels. After a brief discussion of my background, I report on fundamental findings from studies conducted by interaction designers, social and computer scientists, and medical professionals whose work has responded to the crisis in physical activity levels. I give an overview of my proposed mixed-methods research design, and discuss how I can both contribute to and learn from approaches that can successfully support strong study findings.
Interactive snow sculpture painting
J. Scheible
DOI: 10.1145/1979742.1979555
Abstract: This video shows the live painting of snow sculptures with dabs of digital paint, deploying a mobile phone (as a virtual spray can) with an accelerometer, a PC, and a video projector — creating 100% recyclable art. The technology used is called MobiSpray, which the author reported at SIGGRAPH 2009 in the Art Papers track. Using a mobile phone in this context allows the painter to roam freely (walk, stand, lie) around the target object, far or near in real physical space, while looking directly at its surface to see how the painting appears in real time. The phone's keyboard keys control drawing tools such as spray color and spray intensity.
Leveraging trust relationships in digital backchannel communications
Syavash Nobarany, M. Haraty, S. Fels, Brian D. Fisher
DOI: 10.1145/1979742.1979811
Abstract: Discussions during a lecture can clarify lecture points for audience members and help them deepen their understanding. However, the fast pace of lectures and the large number of attendees can make these discussions impossible. Although digital backchannels have been used to address this problem, they have drawbacks, such as increasing distraction and surfacing little valuable information. We suggest incorporating audience members' levels of trust in other members' knowledge into the design of backchannel communication systems. Based on this approach, we present methods and design considerations to overcome these drawbacks of previous backchannel communication systems.
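The core design idea — weighting backchannel content by how much audience members trust each other's knowledge — could be prototyped as a simple scoring rule. The ranking function, member names, and trust weights below are hypothetical illustrations, not the paper's method.

```python
def rank_posts(posts, trust):
    """Rank backchannel posts by the summed trust their endorsers carry.
    `posts` maps post id -> set of endorsing member ids;
    `trust` maps member id -> trust weight in [0, 1]."""
    score = {pid: sum(trust.get(m, 0.0) for m in members)
             for pid, members in posts.items()}
    return sorted(score, key=score.get, reverse=True)

# Hypothetical audience: two endorsements from trusted members
# outrank a single endorsement from a low-trust member.
trust = {"alice": 0.9, "bob": 0.2, "carol": 0.6}
posts = {"q1": {"bob"}, "q2": {"alice", "carol"}, "q3": {"bob", "carol"}}
assert rank_posts(posts, trust) == ["q2", "q3", "q1"]
```

A scheme like this would let highly trusted members' endorsements surface a few valuable posts for the lecturer, addressing the distraction and low-signal drawbacks the abstract mentions.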
Performance: what does a body know
Bob Pritchard, S. Fels, N. D'Alessandro, M. Witvoet, Johnty Wang, C. Hassall, Helene Day-Fraser, Meryn Cadell
DOI: 10.1145/1979742.1979547
Abstract: What Does A Body Know? is a concert work for Digital Ventriloquized Actor (DiVA) and sound clips. A DiVA is a real-time, gesture-controlled, formant-based speech synthesizer using a Cyberglove®, a touch glove, and a Polhemus Tracker® as the main interfaces. When used in conjunction with the performer's own voice, solos and "duets" can be performed in real time.
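Formant-based speech synthesis, the technique the DiVA abstract names, can be illustrated at a toy level by filtering a glottal pulse train through cascaded two-pole resonators, one per formant. The formant frequencies and bandwidths below are textbook-style values for an /a/-like vowel, assumed for illustration and unrelated to DiVA's actual implementation.

```python
import math

def resonator(signal, freq, bandwidth, fs):
    """Two-pole resonant filter: y[n] = b*x[n] + a1*y[n-1] + a2*y[n-2],
    with pole radius set from the bandwidth and gain normalized at DC."""
    r = math.exp(-math.pi * bandwidth / fs)
    a1 = 2 * r * math.cos(2 * math.pi * freq / fs)
    a2 = -r * r
    b = 1 - a1 - a2  # unity gain at DC
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = b * x + a1 * y1 + a2 * y2
        out.append(y)
        y2, y1 = y1, y
    return out

def vowel(f0=110, formants=((700, 110), (1200, 120), (2600, 160)),
          fs=16000, dur=0.3):
    """Impulse-train glottal source at pitch f0, shaped by cascaded
    formant resonators, then peak-normalized to [-1, 1]."""
    n = int(fs * dur)
    period = int(fs / f0)
    source = [1.0 if i % period == 0 else 0.0 for i in range(n)]
    for freq, bw in formants:
        source = resonator(source, freq, bw, fs)
    peak = max(abs(s) for s in source) or 1.0
    return [s / peak for s in source]

samples = vowel()  # 0.3 s of a rough /a/-like vowel at 16 kHz
```

In a gesture-controlled instrument like DiVA, glove input would continuously retarget parameters such as `f0` and the formant frequencies, which is what makes real-time "ventriloquized" speech and song possible.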
SandCanvas: new possibilities in sand animation
Rubaiat Habib Kazi, Kien Chuan Chua, Shengdong Zhao, Richard C. Davis, Kok-Lim Low
DOI: 10.1145/1979742.1979562
Abstract: Sand animation is a performance art technique in which an artist tells stories by creating animated images with sand. Inspired by this medium, we have developed a new multi-touch digital artistic medium named SandCanvas that simplifies the creation of sand animations. The elegance of sand animation lies in the seamless flow of expressive hand gestures that cause images to fluidly evolve, surprising and delighting audiences. While physical sand animation already possesses these properties, SandCanvas enhances them. SandCanvas's color and texture features enable faster, more dramatic transitions, while its mixed media and gesture recording features make it possible to create entirely new experiences. Session recording and frame capture complement these capabilities by simplifying post-production of sand animation performances.