When playing musical instruments, deaf and hard-of-hearing (DHH) people typically sense their music through the vibrations transmitted by the instruments or the movements of their bodies while performing. Sensory substitution devices now exist that convert sound into light and vibration to support DHH people’s musical activities. However, these devices require specialized hardware and are marketed on the assumption that standard musical instruments are available. Hence, a significant gap remains between DHH people and the enjoyment of musical performance. To address this issue, this study identifies end users’ preferred gestures for emulating the musical experience of a selected instrument on a smartphone. Our gesture elicitation study covers 10 instrument types. Herein, we present the results and a new taxonomy of musical instrument gestures. The findings will support the design of gesture-based instrument interfaces that enable DHH people to enjoy their musical performances more directly.
{"title":"Designing Gestures for Digital Musical Instruments: Gesture Elicitation Study with Deaf and Hard of Hearing People","authors":"Ryo Iijima, Akihisa Shitara, Y. Ochiai","doi":"10.1145/3517428.3544828","DOIUrl":"https://doi.org/10.1145/3517428.3544828","url":null,"abstract":"When playing musical instruments, deaf and hard-of-hearing (DHH) people typically sense their music from the vibrations transmitted by the instruments or the movements of their bodies while performing. Sensory substitution devices now exist that convert sounds into light and vibrations to support DHH people’s musical activities. However, these devices require specialized hardware, and the marketing profiles assume that standard musical instruments are available. Hence, a significant gap remains between DHH people and their musical performance enjoyment. To address this issue, this study identifies end users’ preferred gestures when using smartphones to emulate the musical experience based on the instrument selected. This gesture elicitation study applies 10 instrument types. Herein, we present the results and a new taxonomy of musical instrument gestures. The findings will support the design of gesture-based instrument interfaces to enable DHH people to more directly enjoy their musical performances.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129770637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Art has deep connections with both disability and HCI research. From disabled bodies becoming avatars of novel forms of expression, to artistic work created as an act of resistance, art has been a powerful tool to subvert ableist narratives. Artistic practices have also helped to inspire, innovate, and push the boundaries of HCI, giving rise to new technologies and interaction possibilities. Our paper presents an exploration of the experiences and practices of 17 artists who used wheelchairs for mobility. Through a thematic analysis of interviews, we conceptualize three themes: (1) personal journeys through art and disability; (2) social encounters through art; and (3) skills and technology in art making. From these themes, we reflect on how art can help HCI researchers capture the complexity of the experiences of disability and assistive technology use, and how collaboration with disabled artists could help rethink the design of disruptive artistic technologies.
{"title":"Assistive or Artistic Technologies? Exploring the Connections between Art, Disability and Wheelchair Use","authors":"G. Barbareschi, M. Inakage","doi":"10.1145/3517428.3544799","DOIUrl":"https://doi.org/10.1145/3517428.3544799","url":null,"abstract":"Art has deep connections with both disability and HCI research. From disabled bodies becoming avatars of novel forms of expression, to artistic work being created as an act of resistance, art has been a powerful tool to subvert ableist narratives. Artistic practices have also helped to inspire, innovate and push the boundaries of HCI, giving rise to new technologies and interaction possibilities. Our paper presents the exploration of the experiences and practices of 17 artists who used wheelchairs for mobility. Through the thematic analysis of interviews, we conceptualize three themes: (1) Personal journeys through art and disability; (2) Social encounters through art, (3) Skills and technology in art making. From these themes, we reflect on how art can help HCI researchers to capture the complexity of the experiences of disability and assistive technology use and how collaboration with disabled artists could help to rethink the design of disruptive artistic technologies.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125688874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
People who are blind rely on touch and hearing to understand the world around them; however, it is extremely difficult to convey movement through these modes. The advent of refreshable tactile displays (RTDs) offers blind people the potential to access tactile animations for the very first time. A survey of touch readers and vision accessibility experts revealed a high level of enthusiasm for tactile animations, particularly those relating to education, mapping, and concept development. Based on these suggestions, a range of tactile animations was developed, and four were presented to 12 touch readers. The RTD held advantages over traditional tactile graphics for conveying movement, depth, and height; however, there were trade-offs in resolution and textural properties. This work offers a first glimpse into how refreshable tactile displays can best be utilised to convey animated graphics for people who are blind.
{"title":"Animations at Your Fingertips: Using a Refreshable Tactile Display to Convey Motion Graphics for People who are Blind or have Low Vision","authors":"Leona Holloway, S. Ananthanarayan, M. Butler, Madhuka De Silva, K. Ellis, Cagatay Goncu, Kate Stephens, K. Marriott","doi":"10.1145/3517428.3544797","DOIUrl":"https://doi.org/10.1145/3517428.3544797","url":null,"abstract":"People who are blind rely on touch and hearing to understand the world around them, however it is extremely difficult to understand movement through these modes. The advent of refreshable tactile displays (RTDs) offers the potential for blind people to access tactile animations for the very first time. A survey of touch readers and vision accessibility experts revealed a high level of enthusiasm for tactile animations, particularly those relating to education, mapping and concept development. Based on these suggestions, a range of tactile animations were developed and four were presented to 12 touch readers. The RTD held advantages over traditional tactile graphics for conveying movement, depth and height, however there were trade-offs in terms of resolution and textural properties. This work offers a first glimpse into how refreshable tactile displays can best be utilised to convey animated graphics for people who are blind.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132996881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Earcons are a critical auditory modality for those who perceive information best through sound. Yet earcons can trigger sensory sensitivities in neurodivergent individuals, causing pain or discomfort and creating barriers to information access. They must be carefully designed, with neurodivergent representation in the design process, to minimize the harm they impose. To address these challenges, we conduct a study on Twitter, a social media platform with frequent earcons, to understand how to design sensory-sensitive earcons for neurodivergent individuals. We present the results of our qualitative interviews with nine neurodivergent Twitter users, uncovering six key themes for designing sensory-sensitive earcons. Based on our findings, we offer a set of novel guidelines to help practitioners design sensory-sensitive earcons for accessibility.
{"title":"Understanding Design Preferences for Sensory-Sensitive Earcons with Neurodivergent Individuals","authors":"Lauren Race, K. El-Amin, Sarah Anoke, A. Hayward, Amber James, Amy Hurst, Audrey Davis, Theresa Mershon","doi":"10.1145/3517428.3550365","DOIUrl":"https://doi.org/10.1145/3517428.3550365","url":null,"abstract":"Earcons are a critical auditory modality for those who perceive information best through sound. Yet earcons can trigger sensory sensitivities with neurodivergent individuals, causing pain or discomfort and creating barriers to information access. They must be carefully designed with neurodivergent representation in the design process to minimize the harm they impose. To address these challenges, we conduct a study on Twitter, a social media platform with frequent earcons, to understand how to design sensory-sensitive earcons for neurodivergent individuals. We present the results of our qualitative interviews with nine neurodivergent Twitter users, uncovering six key themes for designing sensory-sensitive earcons. Based on our findings, we offer a set of novel guidelines for practitioners to design sensory-sensitive earcons for accessibility.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133152038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Block-based programming environments pose a challenge for people with upper-limb motor impairments because they depend heavily on physically manipulating a mouse or keyboard to drag and drop elements on the screen. Our research aims to make the block-based programming environment Blockly accessible to users with upper-limb motor impairments by adding voice as an alternative input modality. This voice-enabled version of Blockly reduces the need for a pointing device, thus increasing access for people with limited dexterity. The Voice-enabled Blockly system consists of the Blockly application, a speech recognition API, predefined voice commands, and a custom function. A usability study was conducted using a prototype of Voice-enabled Blockly. The results revealed that people with upper-limb motor impairments can use the system; however, they also exposed some shortcomings of the tool and yielded suggestions on how to fix them. Based on these findings, we will revise the system and evaluate it in another user study in the near future.
{"title":"Voice-Enabled Blockly: Usability Impressions of a Speech-driven Block-based Programming System","authors":"Obianuju Okafor, S. Ludi","doi":"10.1145/3517428.3550382","DOIUrl":"https://doi.org/10.1145/3517428.3550382","url":null,"abstract":"Block-based programming environments pose a challenge for people with upper-limb motor impairments. This is because they are highly dependent on the physical manipulation of a mouse or keyboard to drag and drop elements on the screen. Our research aims to make the block-based programming environment Blockly, accessible to users with upper limb motor impairments by adding voice as an alternative input modality. This voice-enabled version of Blockly will reduce the need for the use of a pointing device, thus increasing access for people with limited dexterity. The Voice-enabled Blockly system consists of the Blockly application, a speech recognition API, predefined voice commands, and a custom function. A usability study was conducted using a prototype of Voice-enabled Blockly. The results revealed that people with upper-limb motor impairments can use the system. However, it also exposed some shortcomings of the tool and gave some suggestions on how to fix them. Based on the findings, changes will be made to the system, and then, it will be evaluated in another user study in the near future.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131846502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving medication management for older adults with Mild Cognitive Impairment (MCI) requires designing systems that support functional independence and provide compensatory strategies as their abilities change. Traditional medication management interventions emphasize forming new habits alongside the traditional path of learning to use new technologies. In this study, we navigate designing for older adults with gradual cognitive decline by creating a conversational “check-in” system for routine medication management. We present the design of MATCHA (Medication Action To Check-In for Health Application), informed by exploratory focus groups and design sessions conducted with older adults with MCI and their caregivers, alongside our evaluation based on a two-phase deployment period of 20 weeks. Our results indicate that a conversational “check-in” medication management assistant increased system acceptance while also potentially decreasing the likelihood of accidental over-medication, a common concern for older adults living with MCI.
{"title":"A Collaborative Approach to Support Medication Management in Older Adults with Mild Cognitive Impairment Using Conversational Assistants (CAs)","authors":"N. Mathur, Kunal Dhodapkar, Tamara Zubatiy, Jiachen Li, Brian D. Jones, Elizabeth D. Mynatt","doi":"10.1145/3517428.3544830","DOIUrl":"https://doi.org/10.1145/3517428.3544830","url":null,"abstract":"Improving medication management for older adults with Mild Cognitive Impairment (MCI) requires designing systems that support functional independence and provide compensatory strategies as their abilities change. Traditional medication management interventions emphasize forming new habits alongside the traditional path of learning to use new technologies. In this study, we navigate designing for older adults with gradual cognitive decline by creating a conversational “check-in” system for routine medication management. We present the design of MATCHA - Medication Action To Check-In for Health Application, informed by exploratory focus groups and design sessions conducted with older adults with MCI and their caregivers, alongside our evaluation based on a two-phased deployment period of 20 weeks. Our results indicate that a conversational “check-in” medication management assistant increased system acceptance while also potentially decreasing the likelihood of accidental over-medication, a common concern for older adults dealing with MCI.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132228360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
People with intellectual disabilities often experience inequalities that affect the standard of their everyday lives. Assistive technologies can help alleviate some of these inequalities, yet abandonment rates remain high. This is in part due to a lack of involvement of all stakeholders in their design and evaluation, thus resulting in outputs that do not meet this cohort’s complex and heterogeneous needs. The aim of this half-day workshop is to focus on community building in a field that is relatively thin and disjointed, thereby enabling researchers to share experiences on how to design for and with people with intellectual disabilities, provide internal support, and establish new collaborations. Workshop outcomes will help to fill a gap in the available guidelines on how to include people with intellectual disabilities in research, through more accessible protocols as well as personalised and better fit-for-purpose technologies.
{"title":"Designing with and for People with Intellectual Disabilities","authors":"L. Guedes, R. Gibson, K. Ellis, Laurianne Sitbon, M. Landoni","doi":"10.1145/3517428.3550406","DOIUrl":"https://doi.org/10.1145/3517428.3550406","url":null,"abstract":"People with intellectual disabilities often experience inequalities that affect the standard of their everyday lives. Assistive technologies can help alleviate some of these inequalities, yet abandonment rates remain high. This is in part due to a lack of involvement of all stakeholders in their design and evaluation, thus resulting in outputs that do not meet this cohort’s complex and heterogeneous needs. The aim of this half-day workshop is to focus on community building in a field that is relatively thin and disjointed, thereby enabling researchers to share experiences on how to design for and with people with intellectual disabilities, provide internal support, and establish new collaborations. Workshop outcomes will help to fill a gap in the available guidelines on how to include people with intellectual disabilities in research, through more accessible protocols as well as personalised and better fit-for-purpose technologies.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"31 7","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113933709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Traditional symbol-based AAC devices impose meta-linguistic and memory demands on individuals with complex communication needs and hinder conversation partners from stimulating symbolic language in meaningful moments. This work presents a prototype application that generates situation-specific communication boards formed by a combination of descriptive, narrative, and semantically related words and phrases inferred automatically from photographs. Through semi-structured interviews with AAC professionals, we investigate how this prototype was used to support communication and language learning in naturalistic school and therapy settings. We find that the immediacy of vocabulary reduces conversation partners’ workload, opens up opportunities for AAC stimulation, and facilitates symbolic understanding and sentence construction. We contribute a nuanced understanding of how vocabularies generated automatically from photographs can support individuals with complex communication needs in using and learning symbolic AAC, offering insights into the design of automatic vocabulary generation methods and interfaces that better support various scenarios of use and goals.
{"title":"AAC with Automated Vocabulary from Photographs: Insights from School and Speech-Language Therapy Settings","authors":"M. Vargas, Jiamin Dai, Karyn Moffatt","doi":"10.1145/3517428.3544805","DOIUrl":"https://doi.org/10.1145/3517428.3544805","url":null,"abstract":"Traditional symbol-based AAC devices impose meta-linguistic and memory demands on individuals with complex communication needs and hinder conversation partners from stimulating symbolic language in meaningful moments. This work presents a prototype application that generates situation-specific communication boards formed by a combination of descriptive, narrative, and semantic related words and phrases inferred automatically from photographs. Through semi-structured interviews with AAC professionals, we investigate how this prototype was used to support communication and language learning in naturalistic school and therapy settings. We find that the immediacy of vocabulary reduces conversation partners’ workload, opens up opportunities for AAC stimulation, and facilitates symbolic understanding and sentence construction. We contribute a nuanced understanding of how vocabularies generated automatically from photographs can support individuals with complex communication needs in using and learning symbolic AAC, offering insights into the design of automatic vocabulary generation methods and interfaces to better support various scenarios of use and goals.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123770803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audio description (AD), an additional narration track that conveys visual information in media, improves video accessibility for blind or low vision (BLV) viewers. Despite being the primary beneficiaries of AD, BLV audiences are limited in how they can contribute to the AD writing process due to technology inaccessibility and societal biases. In this poster, we (1) prototype and test AccessibleAD, an accessible AD writing system, (2) analyze what context and features are valued by BLV description writers, and (3) explore nonvisual involvement in AD creation. This work expands on existing literature regarding audio description and explores best practices for expanding access to AD writing.
{"title":"Co-Designing Systems to Support Blind and Low Vision Audio Description Writers","authors":"Lucy Jiang, R. Ladner","doi":"10.1145/3517428.3550394","DOIUrl":"https://doi.org/10.1145/3517428.3550394","url":null,"abstract":"Audio description (AD), an additional narration track that conveys visual information in media, improves video accessibility for blind or low vision (BLV) viewers. Despite being the primary beneficiaries of AD, BLV audiences are limited in how they can contribute to the AD writing process due to technology inaccessibility and societal biases. In this poster, we (1) prototype and test AccessibleAD, an accessible AD writing system, (2) analyze what context and features are valued by BLV description writers, and (3) explore nonvisual involvement in AD creation. This work expands on existing literature regarding audio description and explores best practices for expanding access to AD writing.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126967734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The usage of identity-first language (e.g., “disabled people”) versus person-first language (e.g., “people with disabilities”) to refer to disabled people has been an active and ongoing discussion. However, it remains unclear which form should be used, especially across the different disability categories within the overall demographics of disabled people. To gather and examine the language preferences of disabled people, we surveyed 519 disabled people from 23 countries. Our results show that 49% of disabled people preferred identity-first language, whereas 33% preferred person-first language and 18% had no preference. Additionally, we explore the intra-sectionality and intersectionality of disability categories, gender identifications, age groups, and countries with respect to language preferences, finding that preferences vary both within and across each of these factors. Our qualitative assessment of the survey responses shows that disabled people may have multiple preferences or none. To make our survey data publicly available, we created an interactive and accessible live web platform that enables users to perform intersectional exploration of language preferences. In a secondary investigation, using part-of-speech (POS) tagging, we analyzed the abstracts of 11,536 publications at ACM ASSETS (N=1,564) and ACM CHI (N=9,972), assessing their adoption of identity- and person-first language. We present the results of our analysis and offer recommendations for authors and researchers in choosing the appropriate language to refer to disabled people.
{"title":"Should I Say “Disabled People” or “People with Disabilities”? Language Preferences of Disabled People Between Identity- and Person-First Language","authors":"Ather Sharif, Aedan Liam McCall, Kianna Roces Bolante","doi":"10.1145/3517428.3544813","DOIUrl":"https://doi.org/10.1145/3517428.3544813","url":null,"abstract":"The usage of identity- (e.g., “disabled people”) versus person-first language (e.g., “people with disabilities”) to refer to disabled people has been an active and ongoing discussion. However, it remains unclear which semantic language should be used, especially for different disability categories within the overall demographics of disabled people. To gather and examine the language preferences of disabled people, we surveyed 519 disabled people from 23 countries. Our results show that 49% of disabled people preferred identity-first language whereas 33% preferred person-first language and 18% had no preference. Additionally, we explore the intra-sectionality and intersectionality of disability categories, gender identifications, age groups, and countries on language preferences, finding that language preferences vary within and across each of these factors. Our qualitative assessment of the survey responses shows that disabled people may have multiple or no preferences. To make our survey data publicly available, we created an interactive and accessible live web platform, enabling users to perform intersectional exploration of language preferences. In a secondary investigation, using part-of-speech (POS) tagging, we analyzed the abstracts of 11,536 publications at ACM ASSETS (N=1,564) and ACM CHI (N=9,972), assessing their adoption of identity- and person-first language. We present the results from our analysis and offer recommendations for authors and researchers in choosing the appropriate language to refer to disabled people.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115949902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}