A Navigation Method for Visually Impaired People: Easy to Imagine the Structure of the Stairs
Asuka Miyake, Misa Hirao, Mitsuhiro Goto, Chihiro Takayama, Masahiro Watanabe, Hiroya Minami
People with visual impairments or blindness (VIB) face many problems when they enter unfamiliar areas by themselves. To address this problem, we aim to enable people with VIB to walk alone, even in unfamiliar areas. We propose a navigation method that helps people with VIB easily imagine structures such as staircases and thus move safely when walking alone, even in unfamiliar areas. We conducted an experiment in which six participants with VIB walked up or down stairs of four different structures in an indoor environment. The results verify that the proposed method provides an appropriate amount of guidance messages and conveys them in a safer manner than the existing method.
{"title":"A Navigation Method for Visually Impaired People: Easy to Imagine the Structure of the Stairs","authors":"Asuka Miyake, Misa Hirao, Mitsuhiro Goto, Chihiro Takayama, Masahiro Watanabe, Hiroya Minami","doi":"10.1145/3373625.3418002","DOIUrl":"https://doi.org/10.1145/3373625.3418002","url":null,"abstract":"People with visual impairments or blindness (VIB) face many problems when they enter unfamiliar areas by themselves. To address this problem, we aim to enable people with VIB to walk alone, even in unfamiliar areas. We propose a navigation method that enables people with VIB to imagine structures such as staircases easily and thus move safely when walking alone even in unfamiliar areas. An experiment is conducted with six participants with VIB walking up or down stairs with four different structures in an indoor environment. Its results verify that the proposed method could provide appropriate amounts of guidance messages and convey the messages in a safer manner than the existing method.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129970711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring Collection of Sign Language Datasets: Privacy, Participation, and Model Performance
Danielle Bragg, Oscar Koller, Naomi K. Caselli, W. Thies
As machine learning algorithms continue to improve, collecting training data becomes increasingly valuable. At the same time, increased focus on data collection may introduce compounding privacy concerns. Accessibility projects in particular may put vulnerable populations at risk, as disability status is sensitive, and collecting data from small populations limits anonymity. To help address privacy concerns while maintaining algorithmic performance on machine learning tasks, we propose privacy-enhancing distortions of training datasets. We explore this idea through the lens of sign language video collection, which is crucial for advancing sign language recognition and translation. We present a web study exploring signers’ concerns in contributing to video corpora and their attitudes about using filters, and a computer vision experiment exploring sign language recognition performance with filtered data. Our results suggest that privacy concerns may exist in contributing to sign language corpora, that filters (especially expressive avatars and blurred faces) may impact willingness to participate, and that training on more filtered data may boost recognition accuracy in some cases.
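The paper studies privacy-enhancing distortions such as blurred faces applied to sign language video before it enters a training corpus. The authors' actual filtering pipeline is not described here; the sketch below is only a minimal illustration of one such distortion, a face-blur pass over video frames using OpenCV's bundled Haar cascade, with parameters chosen purely for illustration.

```python
# Minimal face-blur sketch for a privacy-enhancing video distortion.
# Assumptions: OpenCV's stock Haar cascade stands in for whatever face
# detector a real pipeline would use; kernel size and thresholds are illustrative.
import cv2

def blur_faces(in_path: str, out_path: str, kernel: int = 51) -> None:
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, fw, fh) in detector.detectMultiScale(gray, 1.1, 5):
            roi = frame[y:y + fh, x:x + fw]
            frame[y:y + fh, x:x + fw] = cv2.GaussianBlur(roi, (kernel, kernel), 0)
        out.write(frame)
    cap.release()
    out.release()
```

Other filters mentioned in the abstract, such as expressive avatars, would replace the blur step with a rendering stage, but the overall frame-by-frame structure would be similar.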
{"title":"Exploring Collection of Sign Language Datasets: Privacy, Participation, and Model Performance","authors":"Danielle Bragg, Oscar Koller, Naomi K. Caselli, W. Thies","doi":"10.1145/3373625.3417024","DOIUrl":"https://doi.org/10.1145/3373625.3417024","url":null,"abstract":"As machine learning algorithms continue to improve, collecting training data becomes increasingly valuable. At the same time, increased focus on data collection may introduce compounding privacy concerns. Accessibility projects in particular may put vulnerable populations at risk, as disability status is sensitive, and collecting data from small populations limits anonymity. To help address privacy concerns while maintaining algorithmic performance on machine learning tasks, we propose privacy-enhancing distortions of training datasets. We explore this idea through the lens of sign language video collection, which is crucial for advancing sign language recognition and translation. We present a web study exploring signers’ concerns in contributing to video corpora and their attitudes about using filters, and a computer vision experiment exploring sign language recognition performance with filtered data. Our results suggest that privacy concerns may exist in contributing to sign language corpora, that filters (especially expressive avatars and blurred faces) may impact willingness to participate, and that training on more filtered data may boost recognition accuracy in some cases.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130888318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HoloSound: Combining Speech and Sound Identification for Deaf or Hard of Hearing Users on a Head-mounted Display
Ru Guo, Yiru Yang, Johnson Kuang, Xue Bin, D. Jain, Steven M. Goodman, Leah Findlater, Jon E. Froehlich
Head-mounted displays can provide private and glanceable speech and sound feedback to deaf and hard of hearing people, yet prior systems have largely focused on speech transcription. We introduce HoloSound, a HoloLens-based augmented reality (AR) prototype that uses deep learning to classify and visualize sound identity and location in addition to providing speech transcription. This poster paper presents a working proof-of-concept prototype, and discusses future opportunities for advancing AR-based sound awareness.
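HoloSound's models and HoloLens integration are not described in this summary. As a rough, generic illustration of the sound-identification half of such a pipeline, the sketch below turns an audio clip into a log-mel spectrogram and hands it to a classifier; the `classify` function and label set are placeholders, not the authors' network.

```python
# Illustrative sound-identification front end (not the HoloSound implementation).
# librosa computes the log-mel features; the classifier is a placeholder stub.
import librosa
import numpy as np

LABELS = ["door knock", "alarm", "dog bark", "speech"]  # hypothetical label set

def log_mel(path: str, sr: int = 16000, n_mels: int = 64) -> np.ndarray:
    y, sr = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

def classify(features: np.ndarray) -> str:
    # Placeholder: a real system would run a trained deep network here.
    scores = features.mean(axis=1)[: len(LABELS)]
    return LABELS[int(np.argmax(scores))]

print(classify(log_mel("clip.wav")))
```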
{"title":"HoloSound: Combining Speech and Sound Identification for Deaf or Hard of Hearing Users on a Head-mounted Display","authors":"Ru Guo, Yiru Yang, Johnson Kuang, Xue Bin, D. Jain, Steven M. Goodman, Leah Findlater, Jon E. Froehlich","doi":"10.1145/3373625.3418031","DOIUrl":"https://doi.org/10.1145/3373625.3418031","url":null,"abstract":"Head-mounted displays can provide private and glanceable speech and sound feedback to deaf and hard of hearing people, yet prior systems have largely focused on speech transcription. We introduce HoloSound, a HoloLens-based augmented reality (AR) prototype that uses deep learning to classify and visualize sound identity and location in addition to providing speech transcription. This poster paper presents a working proof-of-concept prototype, and discusses future opportunities for advancing AR-based sound awareness.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131116757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lessons Learned in Designing AI for Autistic Adults
Andrew Begel, John C. Tang, Sean Andrist, Michael Barnett, Tony Carbary, Piali Choudhury, Edward Cutrell, Alberto Fung, Sasa Junuzovic, Daniel J. McDuff, Kael Rowan, Shibashankar Sahoo, Jennifer Frances Waldern, Jessica Wolk, Hui Zheng, Annuska Zolyomi
Through an iterative design process using Wizard of Oz (WOz) prototypes, we designed a video calling application for people with Autism Spectrum Disorder. Our Video Calling for Autism prototype provided an Expressiveness Mirror that gave feedback to autistic people on how their facial expressions might be interpreted by their neurotypical conversation partners. This feedback was in the form of emojis representing six emotions and a bar indicating the amount of overall expressiveness demonstrated by the user. However, when we built a working prototype and conducted a user study with autistic participants, their negative feedback caused us to reconsider how our design process led to a prototype that they did not find useful. We reflect on the design challenges around developing AI technology for an autistic user population, how Wizard of Oz prototypes can be overly optimistic in representing AI-driven prototypes, how autistic research participants can respond differently to user experience prototypes of varying fidelity, and how designing for people with diverse abilities needs to include that population in the development process.
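The abstract describes the Expressiveness Mirror's output, emojis for six emotions plus an overall expressiveness bar, but not how scores map to that display. The following is a hedged sketch of one possible mapping; the emotion set, emoji choices, and expressiveness heuristic are assumptions, not the prototype's actual logic.

```python
# Hypothetical mapping from emotion scores to mirror-style feedback
# (emoji for the dominant emotion plus an overall expressiveness bar).
from typing import Dict, Tuple

EMOJI = {  # assumed six-emotion set; the prototype's set may differ
    "happiness": "😊", "sadness": "😢", "anger": "😠",
    "surprise": "😮", "fear": "😨", "disgust": "🤢",
}

def mirror_feedback(scores: Dict[str, float]) -> Tuple[str, str]:
    dominant = max(scores, key=scores.get)
    # Crude expressiveness heuristic: strength of the dominant expression.
    level = int(round(scores[dominant] * 10))
    bar = "#" * level + "-" * (10 - level)
    return EMOJI[dominant], bar

emoji, bar = mirror_feedback(
    {"happiness": 0.7, "sadness": 0.05, "anger": 0.05,
     "surprise": 0.1, "fear": 0.05, "disgust": 0.05})
print(emoji, bar)
```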
{"title":"Lessons Learned in Designing AI for Autistic Adults","authors":"Andrew Begel, John C. Tang, Sean Andrist, Michael Barnett, Tony Carbary, Piali Choudhury, Edward Cutrell, Alberto Fung, Sasa Junuzovic, Daniel J. McDuff, Kael Rowan, Shibashankar Sahoo, Jennifer Frances Waldern, Jessica Wolk, Hui Zheng, Annuska Zolyomi","doi":"10.1145/3373625.3418305","DOIUrl":"https://doi.org/10.1145/3373625.3418305","url":null,"abstract":"Through an iterative design process using Wizard of Oz (WOz) prototypes, we designed a video calling application for people with Autism Spectrum Disorder. Our Video Calling for Autism prototype provided an Expressiveness Mirror that gave feedback to autistic people on how their facial expressions might be interpreted by their neurotypical conversation partners. This feedback was in the form of emojis representing six emotions and a bar indicating the amount of overall expressiveness demonstrated by the user. However, when we built a working prototype and conducted a user study with autistic participants, their negative feedback caused us to reconsider how our design process led to a prototype that they did not find useful. We reflect on the design challenges around developing AI technology for an autistic user population, how Wizard of Oz prototypes can be overly optimistic in representing AI-driven prototypes, how autistic research participants can respond differently to user experience prototypes of varying fidelity, and how designing for people with diverse abilities needs to include that population in the development process.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132844202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Access Differential and Inequitable Access: Inaccessibility for Doctoral Students in Computing
Kristen Shinohara, Michael J. McQuaid, Nayeri Jacobo
Increasingly, support for students with disabilities in post-secondary education has boosted enrollment and graduation rates. Yet such successes have not translated into doctoral degrees. For example, in 2018 the National Science Foundation reported that 3% of math and computer science doctorate recipients identified as having a visual limitation, while 1.2% identified as having a hearing limitation. To better understand why few students with disabilities pursue PhDs in computing and related fields, we conducted an interview study with 19 current and former graduate students who identified as blind or low vision, or deaf or hard of hearing. We asked participants about challenges or barriers they encountered in graduate school, about accommodations they received or did not receive, and about different forms of support. We found that a wide range of inaccessibility issues in research, courses, and in managing accommodations impacted student progress. Contributions from this work include identifying two forms of access inequality that emerged: (1) access differential: the gap between the access that disabled and nondisabled students experience, and (2) inequitable access: the degree to which existing accommodations are inadequate for addressing inaccessibility.
{"title":"Access Differential and Inequitable Access: Inaccessibility for Doctoral Students in Computing","authors":"Kristen Shinohara, Michael J. McQuaid, Nayeri Jacobo","doi":"10.1145/3373625.3416989","DOIUrl":"https://doi.org/10.1145/3373625.3416989","url":null,"abstract":"Increasingly, support for students with disabilities in post-secondary education has boosted enrollment and graduates rates. Yet, such successes are not translated to doctoral degrees. For example, in 2018, the National Science Foundation reported 3% of math and computer science doctorate recipients identified as having a visual limitation while 1.2% identified as having a hearing limitation. To better understand why few students with disabilities pursue PhDs in computing and related fields, we conducted an interview study with 19 current and former graduate students who identified as blind or low vision, or deaf or hard of hearing. We asked participants about challenges or barriers they encountered in graduate school. We asked about accommodations they received, or did not receive, and about different forms of support. We found that a wide range of inaccessibility issues in research, courses, and in managing accommodations impacted student progress. Contributions from this work include identifying two forms of access inequality that emerged: (1) access differential: the gap between the access that non/disabled students experience, and (2) inequitable access: the degree of inadequacy of existing accommodations to address inaccessibility.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133694891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Action Blocks: Making Mobile Technology Accessible for People with Cognitive Disabilities
Lia Carrari, Rain Michaels, Ajit Narayanan, Lei Shi, Xiang Xiao
Mobile technology has become an indispensable part of our daily lives. From home automation to digital entertainment, we rely on mobile technology to progress through our daily routines. However, mobile technology requires complex interactions and nontrivial cognitive effort to use, and it is often inaccessible to people with cognitive disabilities. With this in mind, we designed Action Blocks, an application that provides one-tap access to digital services on Android. A user and/or their caregiver can configure an Action Block with a customized command, such as calling a certain person or turning on the lights. The Action Block is associated with a memorable image (e.g., a photo of the person to call, an icon of a lightbulb) and placed on the device home screen as a one-tap button, as shown in Figure 1. Action Blocks was launched in May 2020 and has received much useful feedback. In this demonstration, we report the key design considerations behind Action Blocks as well as the lessons we learned from user feedback.
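The app's internal schema is not given in this summary; the sketch below is only a guess at what a one-tap block's configuration might hold (label, command, associated image), to make the customization idea concrete. The field names and dispatch mechanism are illustrative assumptions, not the Android app's actual implementation.

```python
# Hypothetical configuration for a one-tap, Action Block-style shortcut.
# Field names and the dispatch mechanism are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class ActionBlock:
    label: str        # spoken/displayed name, e.g. "Call Mom"
    command: str      # the assistant command the tap should fire
    image_path: str   # memorable photo or icon shown on the home screen

def trigger(block: ActionBlock) -> None:
    # A real implementation would hand the command to the platform assistant;
    # here we just show what a single tap resolves to.
    print(f"[{block.label}] -> {block.command}")

call_mom = ActionBlock("Call Mom", "call Mom on speaker", "photos/mom.jpg")
lights_on = ActionBlock("Lights on", "turn on the living room lights", "icons/bulb.png")
trigger(call_mom)
```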
{"title":"Action Blocks: Making Mobile Technology Accessible for People with Cognitive Disabilities","authors":"Lia Carrari, Rain Michaels, Ajit Narayanan, Lei Shi, Xiang Xiao","doi":"10.1145/3373625.3418043","DOIUrl":"https://doi.org/10.1145/3373625.3418043","url":null,"abstract":"Mobile technology has become an indispensable part of our daily lives. From home automation to digital entertainment, we rely on mobile technology to progress through our daily routines. However, mobile technology requires complex interactions and nontrivial cognitive efforts to use, and is often inaccessible to people with cognitive disabilities. With this in mind, we designed Action Blocks, an application that provides one-tap access to digital services on Android. A user and/or their caregiver can configure an Action Block with customized commands, such as calling a certain person, turning on the lights. The Action Block is associated with a memorable image (e.g., a photo of the person to call, an icon of a lightbulb) and placed on the device home screen as a one-tap button, as shown in Figure 1. Action Blocks was launched in May 2020 and received much useful feedback. In this demonstration, we report the key design considerations of Action Blocks as well as the lessons we learned from user feedback.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132698523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Disability design and innovation in computing research in low resource settings
Dafne Zuleima Morgado Ramirez, G. Barbareschi, M. Donovan-Hall, Mohammad Sobuh, Nida' Elayyan, Brenda T. Nakandi, R. Ssekitoleko, J. Olenja, G. Magomere, Sibylle Daymond, Jake Honeywill, Ian Harris, N. Mbugua, L. Kenney, C. Holloway
Eighty percent of people with disabilities worldwide live in low-resource settings, rural areas, informal settlements, and multidimensional poverty. ICT4D leverages technological innovations to deliver programs for international development, but very few such programs focus on, or involve, people with disabilities in low-resource settings. Moreover, most studies focus on publishing the positive stories rather than the learnings and recommendations regarding research processes. In short, researchers rarely examine what was challenging in the process of collaboration. We present reflections from the field across four studies. Our contributions are: (1) an overview of past work in computing with a focus on disability in low-resource settings, and (2) learnings and recommendations from four collaborative projects in Uganda, Jordan, and Kenya over the last two years that are relevant for future HCI studies in low-resource settings with communities with disabilities. We do this through a lens of Disability Interaction and ICT4D.
{"title":"Disability design and innovation in computing research in low resource settings","authors":"Dafne Zuleima Morgado Ramirez, G. Barbareschi, M. Donovan-Hall, Mohammad Sobuh, Nida' Elayyan, Brenda T. Nakandi, R. Ssekitoleko, J. Olenja, G. Magomere, Sibylle Daymond, Jake Honeywill, Ian Harris, N. Mbugua, L. Kenney, C. Holloway","doi":"10.1145/3373625.3417301","DOIUrl":"https://doi.org/10.1145/3373625.3417301","url":null,"abstract":"80% of people with disabilities worldwide live in low resourced settings, rural areas, informal settlements and in multidimensional poverty. ICT4D leverages technological innovations to deliver programs for international development. But very few do so with a focus on and involving people with disabilities in low resource settings. Also, most studies largely focus on publishing the results of the research with a focus on the positive stories and not the learnings and recommendations regarding research processes. In short, researchers rarely examine what was challenging in the process of collaboration. We present reflections from the field across four studies. Our contributions are: (1) an overview of past work in computing with a focus on disability in low resource settings and (2) learnings and recommendations from four collaborative projects in Uganda, Jordan and Kenya over the last two years, that are relevant for future HCI studies in low resource settings with communities with disabilities. We do this through a lens of Disability Interaction and ICT4D.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114433970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AIGuide: An Augmented Reality Hand Guidance Application for People with Visual Impairments
Nelson Daniel Troncoso Aldas, Sooyeon Lee, Chonghan Lee, M. Rosson, John Millar Carroll, N. Vijaykrishnan
Locating and grasping objects is a critical task in people’s daily lives. For people with visual impairments, this task can be a daily struggle. The support of augmented reality frameworks in smartphones has the potential to overcome the limitations of current object detection applications designed for people with visual impairments. We present AIGuide, a self-contained offline smartphone application that leverages augmented reality technology to help users locate and pick up objects around them. We conducted a user study to validate its effectiveness at providing guidance, compare it to other assistive technology form factors, evaluate the use of multimodal feedback, and provide feedback about the overall experience. Our results show that AIGuide is a promising technology to help people with visual impairments locate and acquire objects in their daily routine.
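The abstract does not detail how AIGuide converts an AR-tracked object position into guidance, so the following is only a generic sketch of directional hand guidance: given a target's position in a camera-centered frame, compute coarse verbal cues. The axis conventions and thresholds are assumptions, not the app's implementation.

```python
# Generic directional-guidance sketch (not AIGuide's actual logic).
# Assumes a camera-centered frame: +x right, +y up, +z forward (meters).
import numpy as np

def guidance_cue(target_xyz, reach_radius: float = 0.15) -> str:
    x, y, z = target_xyz
    dist = float(np.linalg.norm(target_xyz))
    if dist <= reach_radius:
        return "The object is within reach, grasp now."
    cues = []
    if abs(x) > 0.05:
        cues.append("move right" if x > 0 else "move left")
    if abs(y) > 0.05:
        cues.append("raise your hand" if y > 0 else "lower your hand")
    cues.append(f"about {dist:.1f} meters ahead")
    return ", ".join(cues)

print(guidance_cue(np.array([0.3, -0.1, 1.2])))
```

In a multimodal version of this idea, the same cue could be rendered as speech, spatial audio, or haptic pulses rather than printed text.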
{"title":"AIGuide: An Augmented Reality Hand Guidance Application for People with Visual Impairments","authors":"Nelson Daniel Troncoso Aldas, Sooyeon Lee, Chonghan Lee, M. Rosson, John Millar Carroll, N. Vijaykrishnan","doi":"10.1145/3373625.3417028","DOIUrl":"https://doi.org/10.1145/3373625.3417028","url":null,"abstract":"Locating and grasping objects is a critical task in people’s daily lives. For people with visual impairments, this task can be a daily struggle. The support of augmented reality frameworks in smartphones has the potential to overcome the limitations of current object detection applications designed for people with visual impairments. We present AIGuide, a self-contained offline smartphone application that leverages augmented reality technology to help users locate and pick up objects around them. We conducted a user study to validate its effectiveness at providing guidance, compare it to other assistive technology form factors, evaluate the use of multimodal feedback, and provide feedback about the overall experience. Our results show that AIGuide is a promising technology to help people with visual impairments locate and acquire objects in their daily routine.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115006336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of Methods for Teaching Accessibility in University Computing Courses
Qiwen Zhao, Vaishnavi Mande, Paula Conn, Sedeeq Al-khazraji, Kristen Shinohara, S. Ludi, Matt Huenerfauth
With an increasing demand for computing professionals with skills in accessibility, it is important for university faculty to select effective methods for educating computing students about the barriers faced by users with disabilities and approaches to improving accessibility. While some prior work has evaluated accessibility educational interventions, many of those studies consisted of firsthand reports from faculty or short-term evaluations. This paper reports the results of a systematic, longitudinal evaluation of methods for teaching accessibility across 29 sections of a human-computer interaction course (required for students in a computing degree program), taught by 10 different professors over four years, with more than 400 students. A control condition (the course without accessibility content) was compared to four intervention conditions: a week of lectures on accessibility, a team design project requiring some accessibility consideration, interaction with someone with a disability, and collaboration with a team member with a disability. Comparing survey data immediately before and after the course, we found that the Lectures, Projects, and Interaction conditions were effective in increasing students' likelihood of considering people with disabilities in a design scenario, their awareness of accessibility barriers, and their knowledge of technical approaches for improving accessibility; students in the Team Member condition scored higher on the final measure only. However, comparing survey responses from students immediately before the course with responses approximately two years later, almost no significant gains were observed, suggesting that interventions within a single course are insufficient for producing long-term changes in measures of students' accessibility learning. This study contributes empirical knowledge to inform university faculty in selecting effective methods for teaching accessibility, and it motivates further research on how to achieve long-term changes in accessibility knowledge, e.g., by reinforcing accessibility throughout a degree program.
{"title":"Comparison of Methods for Teaching Accessibility in University Computing Courses","authors":"Qiwen Zhao, Vaishnavi Mande, Paula Conn, Sedeeq Al-khazraji, Kristen Shinohara, S. Ludi, Matt Huenerfauth","doi":"10.1145/3373625.3417013","DOIUrl":"https://doi.org/10.1145/3373625.3417013","url":null,"abstract":"With an increasing demand for computing professionals with skills in accessibility, it is important for university faculty to select effective methods for educating computing students about barriers faced by users with disabilities and approaches to improving accessibility. While some prior work had evaluated accessibility educational interventions, many prior studies have consisted of firsthand reports from faculty or short-term evaluations. This paper reports on the results of a systematic evaluation of methods for teaching accessibility from a longitudinal study across 29 sections of a human-computer interaction course (required for students in a computing degree program), as taught by 10 distinct professors, throughout four years, with over 400 students. A control condition (course without accessibility content) was compared to four intervention conditions: week of lectures on accessibility, team design project requiring some accessibility consideration, interaction with someone with a disability, and collaboration with a team member with a disability. Comparing survey data immediately before and after the course, we found that the Lectures, Projects, and Interaction conditions were effective in increasing students' likelihood to consider people with disabilities on a design scenario, awareness of accessibility barriers, and knowledge of technical approaches for improving accessibility - with students in the Team Member condition having higher scores on the final measure only. However, comparing survey responses from students immediately before the course and from approximately 2 years later, almost no significant gains were observed, suggesting that interventions within a single course are insufficient for producing long-term changes in measures of students’ accessibility learning. This study contributes to empirical knowledge to inform university faculty in selecting effective methods for teaching accessibility, and it motivates further research on how to achieve long-term changes in accessibility knowledge, e.g. by reinforcing accessibility throughout a degree program.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129587341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing and Evaluating Head-based Pointing on Smartphones for People with Motor Impairments
Muratcan Cicek, Ankit Dave, Wenxin Feng, Michael Xuelin Huang, J. Haines, Jeffrey Nichols
Head-based pointing is an alternative input method for people with motor impairments to access computing devices. This paper proposes a calibration-free head-tracking input mechanism for mobile devices that uses the front-facing camera standard on most devices. To evaluate our design, we performed two Fitts' Law studies. First, we compared our method with an existing head-based pointing solution, Eva Facial Mouse, with participants without motor impairments. Second, we conducted what we believe is the first Fitts' Law study using a mobile head tracker with participants with motor impairments. We extend prior studies with a greater range of indices of difficulty (IDs), [1.62, 5.20] bits, and achieved promising throughput (on average, 0.61 bps for participants with motor impairments and 0.90 bps for those without). We found that users' throughput was 0.95 bps on average in our most difficult task (ID: 5.20 bits), which involved selecting a target half the size of the Android recommendation for a touch target after moving nearly the full height of the screen. This suggests the system is capable of fine-precision tasks. We summarize our observations and the lessons from our user studies into a set of design guidelines for head-based pointing systems.
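For readers unfamiliar with the metrics in the abstract, Fitts' Law studies typically use the Shannon formulation of the index of difficulty and compute throughput from movement time. The sketch below shows that standard calculation; the distances, widths, and times are made-up examples chosen only to land near the abstract's hardest condition, not the paper's data.

```python
# Standard Shannon-formulation Fitts' Law metrics (illustrative numbers only).
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """ID in bits for a target of the given width at the given distance."""
    return math.log2(distance / width + 1)

def throughput(distance: float, width: float, movement_time_s: float) -> float:
    """Throughput in bits per second for one pointing condition."""
    return index_of_difficulty(distance, width) / movement_time_s

# Example: target 40 px wide, 1400 px away, selected in 5.5 s on average.
ID = index_of_difficulty(1400, 40)  # ~5.17 bits, near the study's hardest ID
print(f"ID = {ID:.2f} bits, TP = {throughput(1400, 40, 5.5):.2f} bps")
```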
{"title":"Designing and Evaluating Head-based Pointing on Smartphones for People with Motor Impairments","authors":"Muratcan Cicek, Ankit Dave, Wenxin Feng, Michael Xuelin Huang, J. Haines, Jeffrey Nichols","doi":"10.1145/3373625.3416994","DOIUrl":"https://doi.org/10.1145/3373625.3416994","url":null,"abstract":"Head-based pointing is an alternative input method for people with motor impairments to access computing devices. This paper proposes a calibration-free head-tracking input mechanism for mobile devices that makes use of the front-facing camera that is standard on most devices. To evaluate our design, we performed two Fitts’ Law studies. First, a comparison study of our method with an existing head-based pointing solution, Eva Facial Mouse, with subjects without motor impairments. Second, we conducted what we believe is the first Fitts’ Law study using a mobile head tracker with subjects with motor impairments. We extend prior studies with a greater range of index of difficulties (IDs) [1.62, 5.20] bits and achieved promising throughput (average 0.61 bps with motor impairments and 0.90 bps without). We found that users’ throughput was 0.95 bps on average in our most difficult task (IDs: 5.20 bits), which involved selecting a target half the size of the Android recommendation for a touch target after moving nearly the full height of the screen. This suggests the system is capable of fine precision tasks. We summarize our observations and the lessons from our user studies into a set of design guidelines for head-based pointing systems.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129074720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}