Participatory design process for an in-vehicle affect detection and regulation system for various drivers
M. Jeon, Jason Roberts, Parameshwaran Raman, Jung-Bin Yim, B. Walker. DOI: 10.1145/2049536.2049602

Considerable research has shown that diverse affective (emotional) states influence cognitive processes and performance. Detecting a driver's affective states and regulating them may therefore help improve driving performance and safety. Some populations are especially vulnerable to problems involving driving, affect, and affect regulation (e.g., novice drivers, young drivers, older drivers, and drivers with traumatic brain injury (TBI)). This paper describes initial findings from multiple participatory design processes, including interviews with 21 young drivers and focus groups with a driver with TBI and two driver rehabilitation specialists. Each user group has distinct issues and needs; differentiated approaches are therefore required when designing an in-vehicle assistive technology system for a specific target user group.
{"title":"Session details: Student research competition","authors":"Krzysztof Z Gajos","doi":"10.1145/3253163","DOIUrl":"https://doi.org/10.1145/3253163","url":null,"abstract":"","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115903632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We need to communicate!: helping hearing parents of deaf children learn American Sign Language
Kimberly Weaver, Thad Starner. DOI: 10.1145/2049536.2049554

Language immersion from birth is crucial to a child's language development. However, language immersion can be particularly challenging for hearing parents of deaf children to provide, as they may have to overcome many difficulties while learning American Sign Language (ASL). We are in the process of creating a mobile application to help hearing parents learn ASL. To this end, we interviewed members of our target population to understand their motivations and needs when learning sign language. We found that the most common motivation for parents learning ASL is better communication with their children. Parents are most interested in acquiring more fluent sign language skills by learning to read stories to their children.
Using a game controller for text entry to address abilities and disabilities specific to persons with neuromuscular diseases
T. Felzer, S. Rinderknecht. DOI: 10.1145/2049536.2049616

This poster presents an alternative text entry method based on a commercially available game controller as the input device, along with a demo of the accompanying software application. The system was originally developed for a particular gentleman with the neuromuscular disease Friedreich's Ataxia (FA), who asked us several years ago to develop an optimal keyboard replacement for him. Our work focuses on his impressions in an initial case study testing this newest attempt. Judging from the tester's comments, the outcome seems rather promising in meeting his needs, and it appears very likely that the system could help anyone with a similar condition.
Improving calibration time and accuracy for situation-specific models of color differentiation
David R. Flatla, C. Gutwin. DOI: 10.1145/2049536.2049572

Color vision deficiencies (CVDs) cause problems in situations where people need to differentiate the colors used in digital displays. Recoloring tools exist to reduce the problem, but these tools need a model of the user's color-differentiation ability in order to work. Situation-specific models are a recent approach that accounts for all of the factors affecting a person's CVD (including genetic, acquired, and environmental causes) by using calibration data to form the model. This approach works well, but requires repeated calibration, and the best available calibration procedure takes more than 30 minutes. To address this limitation, we have developed a new situation-specific model of human color differentiation (called ICD-2) that needs far fewer calibration trials. The new model uses a color space that matches human color vision better than the RGB space of the old model, and can therefore extract more meaning from each calibration test. In an empirical comparison, we found that ICD-2 is 24 times faster than the old approach and yields small but significant gains in accuracy. The efficiency of ICD-2 makes it feasible for situation-specific models of individual color differentiation to be used in the real world.
The effect of hand strength on pointing performance of users for different input devices
P. Biswas, P. Langdon. DOI: 10.1145/2049536.2049611

We investigated how hand strength affects the pointing performance of people with and without mobility impairment in graphical user interfaces, across four different input modalities. We found that grip strength and active range of motion of the wrist are most indicative of pointing performance. We used the study to develop a set of linear equations that predict pointing time for the different devices.
Multi-view platform: an accessible live classroom viewing approach for low vision students
R. Kushalnagar, Stephanie A. Ludi, P. Kushalnagar. DOI: 10.1145/2049536.2049600

We present a multiple-view platform for low vision students that utilizes students' personal smartphone cameras and tablets in the classroom. Low vision or deaf students can independently use the platform to obtain flexible, magnified views of lecture visuals, such as the presentation slides or whiteboard, on their personal screens. The platform also enables cooperation among sighted and hearing classmates to provide better views for everyone, including themselves.
Evaluating importance of facial expression in American Sign Language and Pidgin Signed English animations
Matt Huenerfauth, Pengfei Lu, A. Rosenberg. DOI: 10.1145/2049536.2049556

Animations of American Sign Language (ASL) and Pidgin Signed English (PSE) have accessibility benefits for many signers with lower levels of written language literacy. In prior experimental studies we conducted evaluating ASL animations, native signers gave informal feedback critiquing the insufficient and inaccurate facial expressions of the virtual human character. While face movements are important for conveying grammatical and prosodic information in human ASL signing, no empirical evaluation of their impact on the understandability and perceived quality of ASL animations had previously been conducted. To quantify the suggestions of deaf participants in our prior studies, we experimentally evaluated ASL and PSE animations with and without various types of facial expressions, and we found that their inclusion does lead to measurable benefits for the understandability and perceived quality of the animations. This finding provides motivation for our future work on facial expressions in ASL and PSE animations, and it lays a novel methodological groundwork for evaluating the quality of facial expressions for conveying prosodic or grammatical information.
Leveraging large data sets for user requirements analysis
M. Wolters, Vicki L. Hanson, Johanna D. Moore. DOI: 10.1145/2049536.2049550

In this paper, we show how a large demographic data set that includes only high-level information about health and disability can be used to specify user requirements for people with specific needs and impairments. As a case study, we consider adapting spoken dialogue systems (SDS) to the needs of older adults. Such interfaces are becoming increasingly prevalent in telecare and home care, where they will often be used by older adults. As our data set, we chose the English Longitudinal Study of Ageing (ELSA), a large representative survey of the health, wellbeing, and socioeconomic status of English older adults. In an inclusion audit, we show that one in four older people surveyed by ELSA might benefit from SDS due to problems with dexterity, mobility, vision, or literacy. Next, we examine the technology that is available to our target users (technology audit) and estimate factors that might prevent older people from using SDS (exclusion audit). We conclude that while SDS are ideal for solutions delivered over the near-ubiquitous landline, they need to be accessible to people with mild to moderate hearing problems, and thus multimodal solutions should be based on the television, a technology even more widespread than landlines.
Supporting blind photography
C. Jayant, H. Ji, Samuel White, Jeffrey P. Bigham. DOI: 10.1145/2049536.2049573

Blind people want to take photographs for the same reasons as others -- to record important events, to share experiences, and as an outlet for artistic expression. Furthermore, both automatic computer vision technology and human-powered services can be used to give blind people feedback on their environment, but to work at their best, these systems need high-quality photos as input. In this paper, we present the results of a large survey showing how blind people currently use cameras. Next, we introduce EasySnap, an application that provides audio feedback to help blind people take pictures of objects and people, and show that blind photographers take better photographs with this feedback. We then discuss how we iterated on the portrait functionality to create a new application, PortraitFramer, designed specifically for this function. Finally, we present the results of an in-depth study with 15 blind and low-vision participants, showing that they learned to use the application successfully very quickly.