
Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility: Latest Publications

Evaluating haptic technology in accessibility of digital audio workstations for visual impaired creatives.
Christina Karpodini
This research suggests new ways of making interaction with Digital Audio Workstations more accessible for musicians with visual impairments. Accessible tools such as screen readers are often unable to support users within the music production environment. Haptic technologies have been proposed as solutions but are often generic and do not address the individual’s needs. A series of experiments is proposed to examine the possibilities of mapping haptic feedback to audio effect parameters. Subsequently, machine learning is proposed to enable automated mapping tailored to the individual. The expected results will not only provide visually impaired musicians with a new way of producing music but also contribute academic research on materials and technologies that can be used for future accessibility tools.
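A minimal sketch of the kind of parameter-to-haptic mapping the abstract describes, assuming a simple linear mapping from an audio-effect parameter to a normalized vibration amplitude. The function, its name, and the ranges are hypothetical illustrations; the paper proposes experiments, and later machine learning, to discover suitable mappings rather than prescribing one:

```python
def param_to_haptic(value, lo, hi, max_amplitude=1.0):
    """Linearly map an audio-effect parameter onto a vibration amplitude.

    `lo` and `hi` are the parameter's range (e.g., 20-20000 Hz for a
    filter cutoff); the result is scaled into [0, max_amplitude].
    """
    value = min(max(value, lo), hi)  # clamp out-of-range parameter values
    return (value - lo) / (hi - lo) * max_amplitude

# A parameter at the midpoint of its range maps to half amplitude.
print(param_to_haptic(60, 20, 100))  # 0.5
```

A learned mapping, as the abstract suggests, would replace this fixed linear function with a model adapted to the individual user.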
DOI: 10.1145/3517428.3550414 · Published: 2022-10-22
Citations: 1
An Accessible Smart Kitchen Cupboard
Marios Gavaletakis, A. Leonidis, N. Stivaktakis, Maria Korozi, Michalis Roulios, C. Stephanidis
Nowadays, more than a billion people worldwide experience some form of disability, which underscores that accessibility is a major issue that should be taken seriously. Attempting to make people’s daily habits in the kitchen easier and more comfortable, we designed an innovative smart accessible cupboard that can identify various information about the products placed inside it, such as their type, quantity, location, and expiration date. The Smart Kitchen Cupboard is a component of the Intelligent Kitchen, aiming to support users in that space by indicating where to find a desired item, assisting in a context-sensitive manner during the cooking process, and helping with overall inventory organization. Our immediate plans include a full-scale user evaluation to get useful feedback about the current design decisions, so as to further improve the prototype and integrate more features.
DOI: 10.1145/3517428.3550379 · Published: 2022-10-22
Citations: 0
Authoring accessible media content on social networks
Letícia Seixas Pereira, José Coelho, André Rodrigues, João Guerreiro, Tiago Guerreiro, Carlos Duarte
User-generated content plays a key role in social networking, allowing a more active participation, socialisation, and collaboration among users. In particular, media content has been gaining a lot of ground, allowing users to express themselves through different types of formats such as images, GIFs and videos. The majority of this growing type of online visual content remains inaccessible to a part of the population, in particular for those who have a visual disability, despite available tools to mitigate this source of exclusion. We sought to understand how people are perceiving this type of online content in their networks and how support tools are being used. To do so, we conducted a user study, with 258 social network users through an online questionnaire, followed by interviews with 20 of them – 7 blind users and 13 sighted users. Results show how the different approaches being employed by major platforms may not be sufficient to address this issue properly. Our findings reveal that users are not always aware of the possibility and the benefits of adopting accessible practices. From the general perspectives of end-users experiencing accessible practices, concerning barriers encountered, and motivational factors, we also discuss further approaches to create more user engagement and awareness.
DOI: 10.1145/3517428.3544882 · Published: 2022-10-22
Citations: 1
SoundVizVR: Sound Indicators for Accessible Sounds in Virtual Reality for Deaf or Hard-of-Hearing Users
Ziming Li, Shannon Connell, W. Dannels, R. Peiris
Sounds provide vital information such as spatial and interaction cues in virtual reality (VR) applications to convey more immersive experiences to VR users. However, it may be a challenge for deaf or hard-of-hearing (DHH) VR users to access the information given by sounds, which could limit their VR experience. To address this limitation, we present “SoundVizVR”, which explores visualizing sound characteristics and sound types for several types of sounds in VR experience. SoundVizVR uses Sound-Characteristic Indicators to visualize loudness, duration, and location of sound sources in VR and Sound-Type Indicators to present more information about the type of the sound. First, we examined three types of Sound-Characteristic Indicators (On-Object Indicators, Full Mini-Maps and Partial Mini-Maps) and their combinations in a study with 11 DHH participants. We identified that the combination of Full Mini-Map technique and On-Object Indicator was the most preferred visualization and performed best at locating sound sources in VR. Next, we explored presenting more information about the sounds using text and icons as Sound-Type Indicators. A second study with 14 DHH participants found that all Sound-Type Indicator combinations were successful at locating sound sources.
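The abstract's idea of placing loudness-scaled indicators on a mini-map can be sketched as below. Every convention here is an assumption for illustration (treating "up" on the map as the user's facing direction, the dB-to-pixel-size mapping, and the 20 m range), not the paper's actual visual encoding:

```python
import math

def minimap_indicator(user_pos, user_yaw, source_pos, loudness_db,
                      map_radius_px=100, max_range_m=20.0):
    """Project a sound source onto a circular mini-map centered on the user.

    Positions are (x, z) pairs in meters; user_yaw is radians. Returns
    (x_px, y_px, size_px), where y_px points "forward" on the map.
    """
    dx = source_pos[0] - user_pos[0]
    dz = source_pos[1] - user_pos[1]
    dist = math.hypot(dx, dz)
    # Rotate into the user's facing direction so "up" on the map is forward.
    angle = math.atan2(dx, dz) - user_yaw
    r = min(dist / max_range_m, 1.0) * map_radius_px
    x = r * math.sin(angle)
    y = r * math.cos(angle)
    # Louder sounds get larger indicators, clamped to a readable range.
    size = max(4, min(32, int(loudness_db / 3)))
    return (x, y, size)

# A 60 dB source 10 m straight ahead lands halfway up the mini-map.
print(minimap_indicator((0.0, 0.0), 0.0, (0.0, 10.0), 60))  # (0.0, 50.0, 20)
```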
DOI: 10.1145/3517428.3544817 · Published: 2022-10-22
Citations: 3
Including Accessibility in Computer Science Education
C. Baker, Yasmine N. El-Hlaly, A. S. Ross, Kristen Shinohara
Accessibility is an important skill set for computing graduates; however, it is commonly not included in computing curricula. The goal of this workshop is to bring together the relevant stakeholders who are interested in adding accessibility to the curriculum (e.g., computing educators, accessibility researchers, and industry professionals) to discuss what exactly we should be teaching regarding accessibility. The format of the workshop supports two main goals: to reach a consensus on what computing educators should teach regarding accessibility, and to give those who have taught accessibility a chance to share and discuss what they have found to be successful. As part of this workshop, we plan to draft a white paper that discusses the learning objectives, and their relative priorities, derived in the workshop.
DOI: 10.1145/3517428.3550404 · Published: 2022-10-22
Citations: 1
A Dataset of Alt Texts from HCI Publications: Analyses and Uses Towards Producing More Descriptive Alt Texts of Data Visualizations in Scientific Papers
S. Chintalapati, Jonathan Bragg, Lucy Lu Wang
Figures in scientific publications contain important information and results, and alt text is needed for blind and low vision readers to engage with their content. We conduct a study to characterize the semantic content of alt text in HCI publications based on a framework introduced by Lundgard and Satyanarayan [30]. Our study focuses on alt text for graphs, charts, and plots extracted from HCI and accessibility publications; we focus on these communities due to the lack of alt text in papers published outside of these disciplines. We find that the capacity of author-written alt text to fulfill blind and low vision user needs is mixed; for example, only 50% of alt texts in our sample contain information about extrema or outliers, and only 31% contain information about major trends or comparisons conveyed by the graph. We release our collected dataset of author-written alt text, and outline possible ways that it can be used to develop tools and models to assist future authors in writing better alt text. Based on our findings, we also discuss recommendations that can be acted upon by publishers and authors to encourage inclusion of more types of semantic content in alt text.
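The reported coverage statistics (e.g., 50% of alt texts mentioning extrema or outliers) amount to tallying which semantic content levels each coded alt text contains. A toy sketch with invented sample data, using level labels loosely based on Lundgard and Satyanarayan's framework rather than the paper's exact coding scheme:

```python
from collections import Counter

# Each entry: the set of semantic content levels a coder found in one alt text.
# (Hypothetical data; the real dataset is released by the paper's authors.)
coded_alt_texts = [
    {"encoded", "statistical"},                # axes plus extrema/outliers
    {"encoded"},                               # chart type and axes only
    {"encoded", "statistical", "perceptual"},  # also trends/comparisons
    {"encoded", "perceptual"},
]

counts = Counter(level for levels in coded_alt_texts for level in levels)
total = len(coded_alt_texts)
for level in ("encoded", "statistical", "perceptual"):
    print(f"{level}: {counts[level] / total:.0%}")
```

On this toy sample the tally reports 100% encoded, 50% statistical, and 50% perceptual content, mirroring the style of the statistics quoted in the abstract.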
DOI: 10.1145/3517428.3544796 · Published: 2022-09-27
Citations: 7
Uncovering Visually Impaired Gamers’ Preferences for Spatial Awareness Tools Within Video Games
Vishnu Nair, Shao-en Ma, Ricardo E. Gonzalez Penuela, Yicheng He, Karen Lin, Mason Hayes, Hannah Huddleston, Matthew Donnelly, Brian A. Smith
Sighted players gain spatial awareness within video games through sight and spatial awareness tools (SATs) such as minimaps. Visually impaired players (VIPs), however, must often rely heavily on SATs to gain spatial awareness, especially in complex environments where using rich ambient sound design alone may be insufficient. Researchers have developed many SATs for facilitating spatial awareness within VIPs. Yet this abundance disguises a gap in our understanding about how exactly these approaches assist VIPs in gaining spatial awareness and what their relative merits and limitations are. To address this, we investigate four leading approaches to facilitating spatial awareness for VIPs within a 3D video game context. Our findings uncover new insights into SATs for VIPs within video games, including that VIPs value position and orientation information the most from an SAT; that none of the approaches we investigated convey position and orientation effectively; and that VIPs highly value the ability to customize SATs.
DOI: 10.1145/3517428.3544802 · Published: 2022-08-31
Citations: 4
“It’s Just Part of Me:” Understanding Avatar Diversity and Self-presentation of People with Disabilities in Social Virtual Reality
Kexin Zhang, Elmira Deldari, Zhicong Lu, Yaxing Yao, Yuhang Zhao
In social Virtual Reality (VR), users are embodied in avatars and interact with other users in a face-to-face manner using avatars as the medium. With the advent of social VR, people with disabilities (PWD) have shown an increasing presence on this new social media. With their unique disability identity, it is not clear how PWD perceive their avatars and whether and how they prefer to disclose their disability when presenting themselves in social VR. We fill this gap by exploring PWD’s avatar perception and disability disclosure preferences in social VR. Our study involved two steps. We first conducted a systematic review of fifteen popular social VR applications to evaluate their avatar diversity and accessibility support. We then conducted an in-depth interview study with 19 participants who had different disabilities to understand their avatar experiences. Our research revealed a number of disability disclosure preferences and strategies adopted by PWD (e.g., reflect selective disabilities, present a capable self). We also identified several challenges faced by PWD during their avatar customization process. We discuss the design implications to promote avatar accessibility and diversity for future social VR platforms.
DOI: 10.1145/3517428.3544829 · Published: 2022-08-23
Citations: 19
VRBubble: Enhancing Peripheral Awareness of Avatars for People with Visual Impairments in Social Virtual Reality
Tiger F. Ji, Brianna R. Cochran, Yuhang Zhao
Social Virtual Reality (VR) is growing for remote socialization and collaboration. However, current social VR applications are not accessible to people with visual impairments (PVI) due to their focus on visual experiences. We aim to facilitate social VR accessibility by enhancing PVI’s peripheral awareness of surrounding avatar dynamics. We designed VRBubble, an audio-based VR technique that provides surrounding avatar information based on social distances. Based on Hall’s proxemic theory, VRBubble divides the social space with three Bubbles—Intimate, Conversation, and Social Bubble—generating spatial audio feedback to distinguish avatars in different bubbles and provide suitable avatar information. We provide three audio alternatives: earcons, verbal notifications, and real-world sound effects. PVI can select and combine their preferred feedback alternatives for different avatars, bubbles, and social contexts. We evaluated VRBubble and an audio beacon baseline with 12 PVI in a navigation and a conversation context. We found that VRBubble significantly enhanced participants’ avatar awareness during navigation and enabled avatar identification in both contexts. However, VRBubble was shown to be more distracting in crowded environments.
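VRBubble's distance-based bubble assignment can be sketched as a simple classification over user-avatar distance. The radii below are illustrative values in the spirit of Hall's proxemic zones; the abstract does not give the system's actual thresholds:

```python
import math

# Assumed proxemic radii in meters, for illustration only.
INTIMATE_RADIUS = 0.45
CONVERSATION_RADIUS = 1.2
SOCIAL_RADIUS = 3.6

def classify_bubble(user_pos, avatar_pos):
    """Return which bubble an avatar falls into, or None if out of range.

    Positions are (x, z) pairs on the horizontal plane, in meters.
    """
    dx = avatar_pos[0] - user_pos[0]
    dz = avatar_pos[1] - user_pos[1]
    distance = math.hypot(dx, dz)
    if distance <= INTIMATE_RADIUS:
        return "intimate"
    if distance <= CONVERSATION_RADIUS:
        return "conversation"
    if distance <= SOCIAL_RADIUS:
        return "social"
    return None  # beyond the Social Bubble: no audio feedback

print(classify_bubble((0.0, 0.0), (1.0, 0.0)))  # conversation
```

The bubble label would then select which of the user's chosen audio alternatives (earcon, verbal notification, or real-world sound effect) to play for that avatar.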
Tiger F. Ji, Brianna R. Cochran, Yuhang Zhao. VRBubble: Enhancing Peripheral Awareness of Avatars for People with Visual Impairments in Social Virtual Reality. Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, 2022. DOI: 10.1145/3517428.3544821
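The bubble mechanism the abstract describes — classifying nearby avatars by social distance and choosing an audio cue per bubble — can be sketched as a small distance classifier. The distance thresholds below are illustrative assumptions loosely based on Hall's proxemic zones, not the paper's actual bubble radii, and the `Avatar`/`feedback_for` names are hypothetical.

```python
from dataclasses import dataclass

# Illustrative radii in meters, loosely based on Hall's proxemic zones;
# the paper's actual bubble sizes may differ.
INTIMATE_RADIUS = 0.45
CONVERSATION_RADIUS = 1.2
SOCIAL_RADIUS = 3.6

@dataclass
class Avatar:
    name: str
    distance_m: float  # distance from the user's own avatar

def classify_bubble(distance_m: float) -> str:
    """Map an avatar's distance to one of the three bubbles (or outside)."""
    if distance_m <= INTIMATE_RADIUS:
        return "intimate"
    if distance_m <= CONVERSATION_RADIUS:
        return "conversation"
    if distance_m <= SOCIAL_RADIUS:
        return "social"
    return "outside"

def feedback_for(avatar: Avatar, mode: str = "earcon") -> str:
    """Pick an audio cue; `mode` is one of the three alternatives named
    in the abstract: earcon, verbal, or sound_effect."""
    bubble = classify_bubble(avatar.distance_m)
    if bubble == "outside":
        return "silence"
    if mode == "verbal":
        return f"say: {avatar.name} entered your {bubble} bubble"
    return f"play {mode} for {bubble} bubble"

print(feedback_for(Avatar("Alex", 1.0), mode="verbal"))
```

In a real system the cue would also be spatialized toward the avatar's position; this sketch only shows the distance-to-bubble mapping and per-bubble feedback selection.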
Citations: 11
Towards Visualization of Time–Series Ecological Momentary Assessment (EMA) Data on Standalone Voice–First Virtual Assistants
Yichen Han, Christopher Bo Han, Chen Chen, Peng Wei Lee, M. Hogarth, A. Moore, Nadir Weibel, E. Farcas
Population aging is an increasingly important consideration for health care in the 21st century, and maintaining access to and interaction with digital health information is a key challenge for aging populations. Voice-based Intelligent Virtual Assistants (IVAs) are promising for improving the Quality of Life (QoL) of older adults, and coupled with Ecological Momentary Assessments (EMA) they can be effective in collecting important health information from older adults, especially for repeated time-based events. However, this same EMA data is hard for older adults to access: although the newest IVAs are equipped with a display, the effectiveness of visualizing time-series EMA data on standalone IVAs has not been explored. To investigate the potential of visualizing time-series EMA data on standalone IVAs, we designed a prototype system in which older adults can query and examine time-series EMA data on the Amazon Echo Show, a widely used, commercially available standalone screen-based IVA.
Yichen Han, Christopher Bo Han, Chen Chen, Peng Wei Lee, M. Hogarth, A. Moore, Nadir Weibel, E. Farcas. Towards Visualization of Time–Series Ecological Momentary Assessment (EMA) Data on Standalone Voice–First Virtual Assistants. Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, 2022. DOI: 10.1145/3517428.3550398
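EMA studies collect repeated self-reports over time, so a natural first step before charting them on a small smart display is collapsing same-day responses into one point per day. The sketch below illustrates that aggregation; the record schema, the "pain" prompt, and the `daily_averages` helper are assumptions for illustration, not the authors' implementation.

```python
from collections import defaultdict
from datetime import date
from statistics import mean

# Hypothetical EMA records: repeated self-reports collected over time.
ema_records = [
    {"day": date(2022, 7, 1), "prompt": "pain", "rating": 3},
    {"day": date(2022, 7, 1), "prompt": "pain", "rating": 5},
    {"day": date(2022, 7, 2), "prompt": "pain", "rating": 4},
    {"day": date(2022, 7, 3), "prompt": "pain", "rating": 2},
]

def daily_averages(records, prompt):
    """Collapse repeated same-day responses for one prompt into a
    single value per day, yielding a compact series suited to a
    small screen-based IVA display."""
    by_day = defaultdict(list)
    for r in records:
        if r["prompt"] == prompt:
            by_day[r["day"]].append(r["rating"])
    return {d: mean(v) for d, v in sorted(by_day.items())}

series = daily_averages(ema_records, "pain")
for d, avg in series.items():
    print(d.isoformat(), avg)
```

On an Echo Show, a series like this would then be rendered as a simple chart or spoken summary in response to a voice query; this sketch covers only the aggregation step.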
Citations: 2
Journal: Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility