Open source interface politics: identity, acceptance, trust, and lobbying
Roshanak Zilouchian Moghaddam, M. Twidale, Kora A. Bongen
DOI: 10.1145/1979742.1979835
A study of the Drupal open source project shows the problematic status of usability designers with respect to the larger developer community. Issues of power, trust, and identity arise and affect the way that usability recommendations are acted on or ignored. Making a straightforward case for a particular interface design can be insufficient to convince developers. Instead, various additional lobbying strategies may be employed to build up support for the design.
Performance: what does a body know?
Bob Pritchard, S. Fels, N. D'Alessandro, M. Witvoet, Johnty Wang, C. Hassall, Helene Day-Fraser, Meryn Cadell
DOI: 10.1145/1979742.1979724
What Does A Body Know? is a concert work for Digital Ventriloquized Actor (DiVA) and sound clips. A DiVA is a real-time, gesture-controlled, formant-based speech synthesizer that uses a Cyberglove®, touchglove, and Polhemus Tracker® as its main interfaces. When used in conjunction with the performer's own voice, solos and "duets" can be performed in real time.
Trusting experience oriented design
A. O'Kane
DOI: 10.1145/1979742.1979517
Although trust and affective experiences have been linked in HCI research, a connection between traditional trust research in automation and experience design has not been made. This paper aims to start this discussion by showing the connection between experience-oriented HCI design and trust in automation through an experimental study of the Lega, a companion device for enriching experiences in museums. An experience-oriented HCI design approach was used to create this device, and although it is not traditional automation, this study presents the links found between this approach and the bases of trust in automation (performance, process, and purpose) with regard to the experience qualities of transparency, ambiguity, and usefulness, respectively.
Engineering automation in interactive critical systems
R. Bernhaupt, G. Boy, M. Feary, Philippe A. Palanque
DOI: 10.1145/1979742.1979524
This SIG focuses on the engineering of automation in interactive critical systems. Automation has already been studied in a number of (sub-)disciplines and application fields: design, human factors, psychology, (software) engineering, aviation, health care, and games. One distinguishing feature of the area we focus on is that, in the field of interactive critical systems, properties such as reliability, dependability, and fault-tolerance are as important as usability or user experience. The SIG targets two problem areas. The first is the engineering of user interaction with (partly) autonomous systems: how to design, build, and assess autonomous behavior, especially where the user interface must represent both autonomous and interactive objects. An example of such integration is the representation of an unmanned aerial vehicle (UAV), with which no direct interaction is possible, together with aircraft that have to be instructed by an air traffic controller to avoid the UAV. The second is the design and engineering of user interaction for autonomous objects/systems in general (for example, a cruise control in a car or an autopilot in an aircraft). The goal of the SIG is to raise interest in these aspects within the CHI community and to identify a community of researchers and practitioners interested in these increasingly prominent issues of interfaces for interactive critical systems. The expected audience should be interested in integrating these largely unconnected research domains to formulate a new joint research agenda.
The effects of screen-size and communication modality on psychology of mobile device users
Ki Joon Kim, S. Shyam Sundar, Eunil Park
DOI: 10.1145/1979742.1979749
Does screen size matter in mobile devices? There appears to be a move toward larger screens, with the recent launches of Apple's iPad and Samsung's Galaxy Tab, but do these devices undercut perceived mobility and affect user attitudes toward the technology? To answer these and related questions, the present study examines the effects of screen size and communication modality (text vs. video) on mobile device users' perceptions of mobility and content, as well as their attitudes toward technology acceptance. Preliminary data from a between-subjects experiment show that the smaller screen elicited greater perceived mobility, while the larger screen was key to greater enjoyment. News stories in video format yielded greater enjoyment and perceived newsworthiness, while news in text format was perceived as easier to use on a mobile device. Design implications and limitations are discussed as we prepare for a constructive replication.
LoOkie - it feels like being there
Talya Porat, Inbal Rief, Rami Puzis, Y. Elovici
DOI: 10.1145/1979742.1979884
In this paper, we describe the interaction design process and the challenges encountered during the development of LoOkie, a social mobile application that enables members to request and receive live videos or pictures of desired locations from people who are present at the scene. The paper describes, from a human-computer interaction perspective, the development of the application from the birth of the idea through the design process, up to the application's beta launch at the beginning of 2011.
Feminism and interaction design
Shaowen Bardzell, E. Churchill, Jeffrey Bardzell, J. Forlizzi, Rebecca E. Grinter, D. Tatar
DOI: 10.1145/1979742.1979587
This workshop is aimed at exploring the issues at the intersection of feminist thinking and human-computer interaction. Both feminism and HCI have made important contributions to social science in the past several decades, but although their potential for overlap seems high, they have not engaged each other directly until recently. In this workshop we will explore diverse--and contentious--ways that feminist perspectives can support user research, design ideation and problem framing, sketching and prototyping, and design criticism and evaluation. The workshop will include fast-moving mini-panels and hands-on group exercises emphasizing feminist interaction criticism and design ideation.
Visualizing meetings as a graph for more accessible meeting artifacts
Y. Doganata, Mercan Topkara
DOI: 10.1145/1979742.1979850
This paper focuses on capturing, correlating, and visualizing the execution of meetings from recorded data using a business process management approach. Relevant artifacts that are utilized or generated during a meeting, as well as meeting activities, are mapped onto a generic meeting data model. The execution of a meeting is then captured as a graph in which generated meeting artifacts, participants, and meeting tasks are connected. The graph enables faster, structured access to meeting data and gives users better insight into the meeting through visualization.
Children may expect drag-and-drop instead of point-and-click
W. Barendregt, M. M. Bekker
DOI: 10.1145/1979742.1979764
In this paper we present evidence from a pilot study that children may have started to expect the drag-and-drop interaction style. This contrasts with probably the most-cited paper on this topic, from 2001, which states that point-and-click is the most appropriate interaction style for children between 6 and 12 years old. Instead of providing children with information on the expected interaction style, we developed two point-and-click interfaces and let children explore those interfaces themselves. Children consistently tried to apply the drag-and-drop interaction style, both initially and after having discovered the point-and-click style, resulting in problems interacting with the interfaces. This was especially clear for actions with a natural mapping to holding down the mouse button, such as cutting or drawing lines. In summary, it appears that children have begun to expect the drag-and-drop interaction style and that deviating from this standard may result in serious usability problems.
WaveForm: remote video blending for VJs using in-air multitouch gestures
Amartya Banerjee, Jesse Burstyn, A. Girouard, Roel Vertegaal
DOI: 10.1145/1979742.1979941
We present WaveForm, a system that enables a Video Jockey (VJ) to directly manipulate video content on a large display on a stage, from a distance. WaveForm implements an in-air multitouch gesture set to layer, blend, scale, rotate, and position video content on the large display. We believe this leads to a more immersive experience for the VJ user, as well as for the audience witnessing the VJ's performance during a live event.