This research suggests new ways of making interaction with Digital Audio Workstations more accessible for musicians with visual impairments. Accessibility tools such as screen readers are often unable to support users within the music production environment. Haptic technologies have been proposed as solutions, but they are often generic and do not address individual needs. A series of experiments is proposed to examine the possibilities of mapping haptic feedback to audio effect parameters. Subsequently, machine learning is proposed to enable automated mapping and broaden access for individual users. The expected results will not only provide visually impaired musicians with a new way of producing music but also contribute academic research on materials and technologies that can inform future accessibility tools.
{"title":"Evaluating haptic technology in accessibility of digital audio workstations for visual impaired creatives.","authors":"Christina Karpodini","doi":"10.1145/3517428.3550414","DOIUrl":"https://doi.org/10.1145/3517428.3550414","url":null,"abstract":"This research suggests new ways of making interaction with Digital Audio Workstations more accessible for musicians with visual impairments. Accessible tools such as screen readers are often unable to support users within the music production environment. Haptic technologies have been proposed as solutions but are often generic and do not address the individual’s needs. A series of experiments is being suggested to examine the possibilities of mapping haptic feedback to audio effects parameters. Sequentially, machine learning is being proposed to enable automated mapping and expand access to the individual. The expected results will provide visually impaired musicians with a new way of producing music but also will provide academic research on material and technologies that can be used for future accessibility tools.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116111335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marios Gavaletakis, A. Leonidis, N. Stivaktakis, Maria Korozi, Michalis Roulios, C. Stephanidis
More than a billion people worldwide experience some form of disability, underscoring that accessibility is a major issue that must be taken seriously. Aiming to make daily routines in the kitchen easier and more comfortable, we designed an innovative smart accessible cupboard that can identify various information about the products placed inside it, such as their type, quantity, location, and expiration date. The Smart Kitchen Cupboard is a component of the Intelligent Kitchen and aims to support users in that space by indicating where to find a desired item, assisting in a context-sensitive manner during the cooking process, and helping with overall inventory organization. Our immediate plans include a full-scale user evaluation to gather feedback on the current design decisions, further improve the prototype, and integrate more features.
{"title":"An Accessible Smart Kitchen Cupboard","authors":"Marios Gavaletakis, A. Leonidis, N. Stivaktakis, Maria Korozi, Michalis Roulios, C. Stephanidis","doi":"10.1145/3517428.3550379","DOIUrl":"https://doi.org/10.1145/3517428.3550379","url":null,"abstract":"Nowadays more than a billion people worldwide experience some form of disability pointing out that accessibility is a major issue that should be taken seriously into consideration. Attempting to make people’s daily habits in the kitchen area easier and more comfortable, we designed an innovative smart accessible cupboard that can identify various information about the products that are placed inside it, such as their type, quantity, location and expiration date. The Smart Kitchen Cupboard is a component of the Intelligent Kitchen aiming to support users in that space by indicating where to find a desired item, assisting in a context-sensitive manner during the cooking process and helping the overall inventory organization. Our immediate plans include planning a full-scale user evaluation in order to get useful feedback about the current design decisions so as to further improve the prototype and integrate more features.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123676635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Letícia Seixas Pereira, José Coelho, André Rodrigues, João Guerreiro, Tiago Guerreiro, Carlos Duarte
User-generated content plays a key role in social networking, allowing more active participation, socialisation, and collaboration among users. In particular, media content has been gaining ground, allowing users to express themselves through formats such as images, GIFs, and videos. The majority of this growing body of online visual content remains inaccessible to part of the population, in particular to those with a visual disability, despite available tools to mitigate this source of exclusion. We sought to understand how people perceive this type of online content in their networks and how support tools are being used. To do so, we conducted a user study with 258 social network users through an online questionnaire, followed by interviews with 20 of them – 7 blind users and 13 sighted users. Results show how the different approaches employed by major platforms may not be sufficient to address this issue properly. Our findings reveal that users are not always aware of the possibility and the benefits of adopting accessible practices. Drawing on end-users' experiences with accessible practices, the barriers they encountered, and their motivational factors, we also discuss further approaches to create more user engagement and awareness.
{"title":"Authoring accessible media content on social networks","authors":"Letícia Seixas Pereira, José Coelho, André Rodrigues, João Guerreiro, Tiago Guerreiro, Carlos Duarte","doi":"10.1145/3517428.3544882","DOIUrl":"https://doi.org/10.1145/3517428.3544882","url":null,"abstract":"User-generated content plays a key role in social networking, allowing a more active participation, socialisation, and collaboration among users. In particular, media content has been gaining a lot of ground, allowing users to express themselves through different types of formats such as images, GIFs and videos. The majority of this growing type of online visual content remains inaccessible to a part of the population, in particular for those who have a visual disability, despite available tools to mitigate this source of exclusion. We sought to understand how people are perceiving this type of online content in their networks and how support tools are being used. To do so, we conducted a user study, with 258 social network users through an online questionnaire, followed by interviews with 20 of them – 7 blind users and 13 sighted users. Results show how the different approaches being employed by major platforms may not be sufficient to address this issue properly. Our findings reveal that users are not always aware of the possibility and the benefits of adopting accessible practices. From the general perspectives of end-users experiencing accessible practices, concerning barriers encountered, and motivational factors, we also discuss further approaches to create more user engagement and awareness.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126327244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sounds provide vital information such as spatial and interaction cues in virtual reality (VR) applications to convey more immersive experiences to VR users. However, it may be a challenge for deaf or hard-of-hearing (DHH) VR users to access the information given by sounds, which could limit their VR experience. To address this limitation, we present “SoundVizVR”, which explores visualizing sound characteristics and sound types for several kinds of sounds in VR experiences. SoundVizVR uses Sound-Characteristic Indicators to visualize the loudness, duration, and location of sound sources in VR, and Sound-Type Indicators to present more information about the type of each sound. First, we examined three types of Sound-Characteristic Indicators (On-Object Indicators, Full Mini-Maps, and Partial Mini-Maps) and their combinations in a study with 11 DHH participants. We identified that the combination of the Full Mini-Map technique and the On-Object Indicator was the most preferred visualization and performed best at locating sound sources in VR. Next, we explored presenting more information about the sounds using text and icons as Sound-Type Indicators. A second study with 14 DHH participants found that all Sound-Type Indicator combinations were successful at locating sound sources.
{"title":"SoundVizVR: Sound Indicators for Accessible Sounds in Virtual Reality for Deaf or Hard-of-Hearing Users","authors":"Ziming Li, Shannon Connell, W. Dannels, R. Peiris","doi":"10.1145/3517428.3544817","DOIUrl":"https://doi.org/10.1145/3517428.3544817","url":null,"abstract":"Sounds provide vital information such as spatial and interaction cues in virtual reality (VR) applications to convey more immersive experiences to VR users. However, it may be a challenge for deaf or hard-of-hearing (DHH) VR users to access the information given by sounds, which could limit their VR experience. To address this limitation, we present “SoundVizVR”, which explores visualizing sound characteristics and sound types for several types of sounds in VR experience. SoundVizVR uses Sound-Characteristic Indicators to visualize loudness, duration, and location of sound sources in VR and Sound-Type Indicators to present more information about the type of the sound. First, we examined three types of Sound-Characteristic Indicators (On-Object Indicators, Full Mini-Maps and Partial Mini-Maps) and their combinations in a study with 11 DHH participants. We identified that the combination of Full Mini-Map technique and On-Object Indicator was the most preferred visualization and performed best at locating sound sources in VR. Next, we explored presenting more information about the sounds using text and icons as Sound-Type Indicators. A second study with 14 DHH participants found that all Sound-Type Indicator combinations were successful at locating sound sources.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126530059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C. Baker, Yasmine N. El-Hlaly, A. S. Ross, Kristen Shinohara
Accessibility is an important skill set for computing graduates; however, it is commonly not included in computing curricula. The goal of this workshop is to bring together the relevant stakeholders who are interested in adding accessibility to the curriculum (e.g., computing educators, accessibility researchers, and industry professionals) to discuss what exactly we should be teaching regarding accessibility. The workshop format supports two main goals: to reach a consensus on what computing educators should teach regarding accessibility, and to give those who have taught accessibility a chance to share and discuss what they have found successful. As part of this workshop, we plan to draft a white paper that discusses the learning objectives, and their relative priorities, derived in the workshop.
{"title":"Including Accessibility in Computer Science Education","authors":"C. Baker, Yasmine N. El-Hlaly, A. S. Ross, Kristen Shinohara","doi":"10.1145/3517428.3550404","DOIUrl":"https://doi.org/10.1145/3517428.3550404","url":null,"abstract":"Accessibility is an important skillset for computing graduates, however it is commonly not included in computing curriculums. The goal of this workshop is to bring together the relevant stakeholders who are interested in adding accessibility into the curriculum (e.g. computing educators, accessibility researchers, and industry professionals) to discuss what exactly we should be teaching regarding accessibility. The format of the workshop works to support two main goals, to provide a consensus on what should be taught by computing educators regarding accessibility and to provide those who have taught accessibility a chance to share and discuss what they have found to be successful. As a part of this workshop, we plan to draft a white paper that discusses the learning objectives and their relative priorities that were derived in the workshop.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132968907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Figures in scientific publications contain important information and results, and alt text is needed for blind and low vision readers to engage with their content. We conduct a study to characterize the semantic content of alt text in HCI publications based on a framework introduced by Lundgard and Satyanarayan [30]. Our study focuses on alt text for graphs, charts, and plots extracted from HCI and accessibility publications; we focus on these communities due to the lack of alt text in papers published outside of these disciplines. We find that the capacity of author-written alt text to fulfill blind and low vision user needs is mixed; for example, only 50% of alt texts in our sample contain information about extrema or outliers, and only 31% contain information about major trends or comparisons conveyed by the graph. We release our collected dataset of author-written alt text, and outline possible ways that it can be used to develop tools and models to assist future authors in writing better alt text. Based on our findings, we also discuss recommendations that can be acted upon by publishers and authors to encourage inclusion of more types of semantic content in alt text.
{"title":"A Dataset of Alt Texts from HCI Publications: Analyses and Uses Towards Producing More Descriptive Alt Texts of Data Visualizations in Scientific Papers","authors":"S. Chintalapati, Jonathan Bragg, Lucy Lu Wang","doi":"10.1145/3517428.3544796","DOIUrl":"https://doi.org/10.1145/3517428.3544796","url":null,"abstract":"Figures in scientific publications contain important information and results, and alt text is needed for blind and low vision readers to engage with their content. We conduct a study to characterize the semantic content of alt text in HCI publications based on a framework introduced by Lundgard and Satyanarayan [30]. Our study focuses on alt text for graphs, charts, and plots extracted from HCI and accessibility publications; we focus on these communities due to the lack of alt text in papers published outside of these disciplines. We find that the capacity of author-written alt text to fulfill blind and low vision user needs is mixed; for example, only 50% of alt texts in our sample contain information about extrema or outliers, and only 31% contain information about major trends or comparisons conveyed by the graph. We release our collected dataset of author-written alt text, and outline possible ways that it can be used to develop tools and models to assist future authors in writing better alt text. Based on our findings, we also discuss recommendations that can be acted upon by publishers and authors to encourage inclusion of more types of semantic content in alt text.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"4 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121006712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vishnu Nair, Shao-en Ma, Ricardo E. Gonzalez Penuela, Yicheng He, Karen Lin, Mason Hayes, Hannah Huddleston, Matthew Donnelly, Brian A. Smith
Sighted players gain spatial awareness within video games through sight and spatial awareness tools (SATs) such as minimaps. Visually impaired players (VIPs), however, must often rely heavily on SATs to gain spatial awareness, especially in complex environments where rich ambient sound design alone may be insufficient. Researchers have developed many SATs for facilitating spatial awareness for VIPs. Yet this abundance disguises a gap in our understanding of how exactly these approaches assist VIPs in gaining spatial awareness and what their relative merits and limitations are. To address this, we investigate four leading approaches to facilitating spatial awareness for VIPs within a 3D video game context. Our findings uncover new insights into SATs for VIPs within video games, including that VIPs value position and orientation information the most from an SAT; that none of the approaches we investigated convey position and orientation effectively; and that VIPs highly value the ability to customize SATs.
{"title":"Uncovering Visually Impaired Gamers’ Preferences for Spatial Awareness Tools Within Video Games","authors":"Vishnu Nair, Shao-en Ma, Ricardo E. Gonzalez Penuela, Yicheng He, Karen Lin, Mason Hayes, Hannah Huddleston, Matthew Donnelly, Brian A. Smith","doi":"10.1145/3517428.3544802","DOIUrl":"https://doi.org/10.1145/3517428.3544802","url":null,"abstract":"Sighted players gain spatial awareness within video games through sight and spatial awareness tools (SATs) such as minimaps. Visually impaired players (VIPs), however, must often rely heavily on SATs to gain spatial awareness, especially in complex environments where using rich ambient sound design alone may be insufficient. Researchers have developed many SATs for facilitating spatial awareness within VIPs. Yet this abundance disguises a gap in our understanding about how exactly these approaches assist VIPs in gaining spatial awareness and what their relative merits and limitations are. To address this, we investigate four leading approaches to facilitating spatial awareness for VIPs within a 3D video game context. Our findings uncover new insights into SATs for VIPs within video games, including that VIPs value position and orientation information the most from an SAT; that none of the approaches we investigated convey position and orientation effectively; and that VIPs highly value the ability to customize SATs.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125794086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In social Virtual Reality (VR), users are embodied in avatars and interact with other users in a face-to-face manner using avatars as the medium. With the advent of social VR, people with disabilities (PWD) have shown an increasing presence on this new social medium. Given their unique disability identities, it is not clear how PWD perceive their avatars and whether and how they prefer to disclose their disability when presenting themselves in social VR. We fill this gap by exploring PWD's avatar perception and disability disclosure preferences in social VR. Our study involved two steps. We first conducted a systematic review of fifteen popular social VR applications to evaluate their avatar diversity and accessibility support. We then conducted an in-depth interview study with 19 participants who had different disabilities to understand their avatar experiences. Our research revealed a number of disability disclosure preferences and strategies adopted by PWD (e.g., reflecting selective disabilities, presenting a capable self). We also identified several challenges faced by PWD during the avatar customization process. We discuss design implications for promoting avatar accessibility and diversity on future social VR platforms.
{"title":"“It’s Just Part of Me:” Understanding Avatar Diversity and Self-presentation of People with Disabilities in Social Virtual Reality","authors":"Kexin Zhang, Elmira Deldari, Zhicong Lu, Yaxing Yao, Yuhang Zhao","doi":"10.1145/3517428.3544829","DOIUrl":"https://doi.org/10.1145/3517428.3544829","url":null,"abstract":"In social Virtual Reality (VR), users are embodied in avatars and interact with other users in a face-to-face manner using avatars as the medium. With the advent of social VR, people with disabilities (PWD) have shown an increasing presence on this new social media. With their unique disability identity, it is not clear how PWD perceive their avatars and whether and how they prefer to disclose their disability when presenting themselves in social VR. We fill this gap by exploring PWD’s avatar perception and disability disclosure preferences in social VR. Our study involved two steps. We first conducted a systematic review of fifteen popular social VR applications to evaluate their avatar diversity and accessibility support. We then conducted an in-depth interview study with 19 participants who had different disabilities to understand their avatar experiences. Our research revealed a number of disability disclosure preferences and strategies adopted by PWD (e.g., reflect selective disabilities, present a capable self). We also identified several challenges faced by PWD during their avatar customization process. We discuss the design implications to promote avatar accessibility and diversity for future social VR platforms.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127138242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social Virtual Reality (VR) is increasingly used for remote socialization and collaboration. However, current social VR applications are not accessible to people with visual impairments (PVI) due to their focus on visual experiences. We aim to facilitate social VR accessibility by enhancing PVI's peripheral awareness of surrounding avatar dynamics. We designed VRBubble, an audio-based VR technique that provides surrounding avatar information based on social distances. Based on Hall's proxemic theory, VRBubble divides the social space into three bubbles (Intimate, Conversation, and Social), generating spatial audio feedback to distinguish avatars in different bubbles and provide suitable avatar information. We provide three audio alternatives: earcons, verbal notifications, and real-world sound effects. PVI can select and combine their preferred feedback alternatives for different avatars, bubbles, and social contexts. We evaluated VRBubble against an audio beacon baseline with 12 PVI in a navigation and a conversation context. We found that VRBubble significantly enhanced participants' avatar awareness during navigation and enabled avatar identification in both contexts. However, VRBubble was shown to be more distracting in crowded environments.
{"title":"VRBubble: Enhancing Peripheral Awareness of Avatars for People with Visual Impairments in Social Virtual Reality","authors":"Tiger F. Ji, Brianna R. Cochran, Yuhang Zhao","doi":"10.1145/3517428.3544821","DOIUrl":"https://doi.org/10.1145/3517428.3544821","url":null,"abstract":"Social Virtual Reality (VR) is growing for remote socialization and collaboration. However, current social VR applications are not accessible to people with visual impairments (PVI) due to their focus on visual experiences. We aim to facilitate social VR accessibility by enhancing PVI’s peripheral awareness of surrounding avatar dynamics. We designed VRBubble, an audio-based VR technique that provides surrounding avatar information based on social distances. Based on Hall’s proxemic theory, VRBubble divides the social space with three Bubbles—Intimate, Conversation, and Social Bubble—generating spatial audio feedback to distinguish avatars in different bubbles and provide suitable avatar information. We provide three audio alternatives: earcons, verbal notifications, and real-world sound effects. PVI can select and combine their preferred feedback alternatives for different avatars, bubbles, and social contexts. We evaluated VRBubble and an audio beacon baseline with 12 PVI in a navigation and a conversation context. We found that VRBubble significantly enhanced participants’ avatar awareness during navigation and enabled avatar identification in both contexts. However, VRBubble was shown to be more distracting in crowded environments.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125673574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yichen Han, Christopher Bo Han, Chen Chen, Peng Wei Lee, M. Hogarth, A. Moore, Nadir Weibel, E. Farcas
Population aging is an increasingly important consideration for health care in the 21st century, and continuing to access and interact with digital health information is a key challenge for aging populations. Voice-based Intelligent Virtual Assistants (IVAs) show promise for improving the Quality of Life (QoL) of older adults, and coupled with Ecological Momentary Assessments (EMA) they can be effective for collecting important health information from older adults, especially when it comes to repeated time-based events. However, this same EMA data is hard for older adults to access: although the newest IVAs are equipped with a display, the effectiveness of visualizing time-series EMA data on standalone IVAs has not been explored. To investigate the potential opportunities for visualizing time-series EMA data on standalone IVAs, we designed a prototype system in which older adults are able to query and examine time-series EMA data on the Amazon Echo Show, a widely used, commercially available standalone screen-based IVA. We conducted a preliminary semi-structured interview with a geriatrician and an older adult, and identified three findings that should be carefully considered when designing such visualizations.
{"title":"Towards Visualization of Time–Series Ecological Momentary Assessment (EMA) Data on Standalone Voice–First Virtual Assistants","authors":"Yichen Han, Christopher Bo Han, Chen Chen, Peng Wei Lee, M. Hogarth, A. Moore, Nadir Weibel, E. Farcas","doi":"10.1145/3517428.3550398","DOIUrl":"https://doi.org/10.1145/3517428.3550398","url":null,"abstract":"Population aging is an increasingly important consideration for health care in the 21th century, and continuing to have access and interact with digital health information is a key challenge for aging populations. Voice-based Intelligent Virtual Assistants (IVAs) are promising to improve the Quality of Life (QoL) of older adults, and coupled with Ecological Momentary Assessments (EMA) they can be effective to collect important health information from older adults, especially when it comes to repeated time-based events. However, this same EMA data is hard to access for the older adult: although the newest IVAs are equipped with a display, the effectiveness of visualizing time–series based EMA data on standalone IVAs has not been explored. To investigate the potential opportunities for visualizing time–series based EMA data on standalone IVAs, we designed a prototype system, where older adults are able to query and examine the time–series EMA data on Amazon Echo Show — a widely used commercially available standalone screen–based IVA. We conducted a preliminary semi–structured interview with a geriatrician and an older adult, and identified three findings that should be carefully considered when designing such visualizations.","PeriodicalId":384752,"journal":{"name":"Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121574534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}