CareHub: Smart Screen VUI and Home Appliances Control for Older Adults
Ningjing Sun. Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 2020-10-26. DOI: https://doi.org/10.1145/3373625.3418051

While voice user interfaces (VUIs) are becoming promising tools for controlling home appliances, they pose great challenges to older adults. In this paper, I explored how multimedia interaction (i.e., voice plus visual output) can enhance the VUI so that older adults can better control home appliances. I conducted two preliminary studies with six older adults to understand current usability problems. Based on the results, I designed a smart screen prototype with a visually enhanced VUI. The evaluation showed that participants were highly receptive to the prototype and considered its features effective and accessible.

Automated Generation of Accessible PDF
Shaban Zulfiqar, Safa Arooj, Umar Hayat, S. Shahid, Asim Karim. Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 2020-10-26. DOI: https://doi.org/10.1145/3373625.3418045

LaTeX is widely used in STEM fields for creating high-quality documents that are converted to the Portable Document Format (PDF) for dissemination. Currently available LaTeX systems do not guarantee that the generated PDFs comply with international accessibility standards. In this work, we present AGAP (Automated Generation of Accessible PDF), which automates the generation of accessible PDFs from LaTeX and makes the process itself accessible. AGAP flags accessibility violations and provides guidance on how to fix them at compile time. AGAP allows interaction through speech synthesis and keyboard shortcuts, making it fully accessible to persons with vision impairments (PVIs). Evaluating an accessible PDF generated with AGAP using a standard accessibility checker resulted in far fewer violations than a PDF generated with another desktop LaTeX editor.

TIP-Toy: a tactile, open-source computational toolkit to support learning across visual abilities
G. Barbareschi, Enrico Costanza, C. Holloway. Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 2020-10-26. DOI: https://doi.org/10.1145/3373625.3417005

Many computational toolkits that promote early learning of basic computational concepts and practices are inaccessible to learners with reduced visual abilities. We report on the design of TIP-Toy, a tactile and inclusive open-source toolkit that allows children with different visual abilities to learn about computational topics through music by combining a series of physical blocks. TIP-Toy was developed through two design consultations with experts and potential users. The first round of consultations was conducted with 3 visually impaired adults with significant programming experience; the second involved 9 children with mixed visual abilities. Through these design consultations we collected feedback on TIP-Toy and observed children's interactions with the toolkit. We discuss appropriate features for future iterations of TIP-Toy to maximise the opportunities for accessible and enjoyable learning experiences.

Constructive Visualization to Inform the Design and Exploration of Tactile Data Representations
Danyang Fan, Alexa Fay Siu, Sile O'Modhrain, Sean Follmer. Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 2020-10-26. DOI: https://doi.org/10.1145/3373625.3418027

As data visualization has become increasingly important in our society, many challenges prevent people who are blind and visually impaired (BVI) from fully engaging with data and data graphics. For example, tactile data representations are commonly used by BVI people to explore spatial graphics, but it is difficult for BVI people to construct and understand tactile representations without prior training or expert assistance. In this work, we adopt a constructive visualization framework of using simple and versatile tokens to engage non-experts in the construction of tactile data representations. We present preliminary results of how participants chose to interpret and create tactile data representations and the preferred haptic exploratory procedures used for retrieving information. All participants used similar construction strategies and converged upon compact 3D spatial forms to retrieve and display analytical information. These insights can inform future data visualization authoring and consumption tools that users of more diverse skill backgrounds can effectively navigate.

The Reliability of Fitts’s Law as a Movement Model for People with and without Limited Fine Motor Function
Ather Sharif, Victoria Pao, Katharina Reinecke, J. Wobbrock. Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 2020-10-26. DOI: https://doi.org/10.1145/3373625.3416999

For over six decades, Fitts’s law (1954) has been utilized by researchers to quantify human pointing performance in terms of “throughput,” a combined speed-accuracy measure of aimed movement efficiency. Throughput measurements are commonly used to evaluate pointing techniques and devices, helping to inform software and hardware developments. Although Fitts’s law has been used extensively in HCI and beyond, its test-retest reliability, both in terms of throughput and model fit, from one session to the next, is still unexplored. Additionally, despite the fact that prior work has shown that Fitts’s law provides good model fits, with Pearson correlation coefficients commonly at r=.90 or above, the model fitness of Fitts’s law has not been thoroughly investigated for people who exhibit limited fine motor function in their dominant hand. To fill these gaps, we conducted a study with 21 participants with limited fine motor function and 34 participants without such limitations. Each participant performed a classic reciprocal pointing task comprising vertical ribbons in a 1-D layout in two sessions, which were at least four hours and at most 48 hours apart. Our findings indicate that the throughput values between the two sessions were statistically significantly different, both for people with and without limited fine motor function, suggesting that Fitts’s law provides low test-retest reliability. Importantly, the test-retest reliability of Fitts’s throughput metric was 4.7% lower for people with limited fine motor function. Additionally, we found that the model fitness of Fitts’s law as measured by Pearson correlation coefficient, r, was .89 (SD=0.08) for people without limited fine motor function, and .81 (SD=0.09) for people with limited fine motor function. Taken together, these results indicate that Fitts’s law should be used with caution and, if possible, over multiple sessions, especially when used in assistive technology evaluations.

Investigating Challenges Faced by Learners with Visual Impairments using Block-Based Programming/Hybrid Environments
Aboubakar Mountapmbeme, S. Ludi. Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 2020-10-26. DOI: https://doi.org/10.1145/3373625.3417998

With the increasing use of block-based programming environments in the K-12 curriculum, accessibility is needed in order to serve all students. Accessible block-based systems are in their infancy. Such systems would provide students with visual impairments the opportunity to learn programming and take part in computational thinking activities using the same systems that most sighted learners find appealing. However, despite the availability of these systems, little is known about their long-term use in the educational milieu. As a result, we conducted a survey with twelve teachers of students with visual impairments to learn about the use of these systems in teaching their students and to understand the barriers that students face in the learning process. Our study reveals that only one block-based programming environment is common among teachers and that several challenges exist. These challenges range from limited learner preparedness, through difficulties editing and navigating code, to ineffective system feedback.

Teacher Views of Math E-learning Tools for Students with Specific Learning Disabilities
Z. Wen, Erica Silverstein, Yuhang Zhao, Anjelika Lynne S. Amog, K. Garnett, Shiri Azenkot. Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 2020-10-26. DOI: https://doi.org/10.1145/3373625.3417029

Many students with specific learning disabilities (SLDs) have difficulty learning math. To succeed in math, they need to receive personalized support from teachers. Recently, math e-learning tools that provide personalized math skills training have gained popularity. However, we know little about how well these tools help teachers personalize instruction for students with SLDs. To answer this question, we conducted semi-structured interviews with 12 teachers who taught students with SLDs in grades five to eight. We found that participants used math e-learning tools that were not designed specifically for students with SLDs. Participants had difficulty using these tools because of text-intensive user interfaces, insufficient feedback about student performance, inability to adjust difficulty levels, and problems with setup and maintenance. Participants also needed assistive technology for their students but faced challenges in obtaining and using it. From our findings, we distilled design implications to help shape the design of more inclusive and effective e-learning tools.

Emergency navigation assistance for industrial plants workers subject to situational impairment
D. Ahmetovic, C. Bettini, M. Ciucci, F. Dacarro, P. Dubini, A. Gotti, Gerard O'Reilly, A. Marino, S. Mascetti, D. Sarigiannis. Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 2020-10-26. DOI: https://doi.org/10.1145/3373625.3418016

This paper reports our ongoing effort in the development of the ROSSINI system, which aims to address emergency situations in industrial plants. The user interaction design of ROSSINI described in this paper takes into account the fact that the user can be subject to situational impairment (e.g., limited sight due to smoke in the environment). As such, it is envisioned that existing solutions designed for people with disabilities can be adopted and extended for this purpose.

Breaking Boundaries with Live Transcribe: Expanding Use Cases Beyond Standard Captioning Scenarios
F. Loizides, Sara H. Basson, D. Kanevsky, Olga Prilepova, Sagar Savla, Susanna Zaraysky. Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 2020-10-26. DOI: https://doi.org/10.1145/3373625.3417300

In this paper, we explore non-traditional, serendipitous uses of an automatic speech recognition (ASR) application called Live Transcribe. Through these, we are able to identify interaction use cases for developing further technology to enhance the communication capabilities of deaf and hard of hearing people.

PantoGuide: A Haptic and Audio Guidance System To Support Tactile Graphics Exploration
Elyse D. Z. Chase, A. Siu, Abena Boadi-Agyemang, Gene S.-H. Kim, Eric J. Gonzalez, Sean Follmer. Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 2020-10-26. DOI: https://doi.org/10.1145/3373625.3418023

The ability to effectively read and interpret tactile graphics and charts is an essential part of a tactile learner’s path to literacy, but it is a skill that requires instruction and training. Many teachers of the visually impaired (TVIs) report that blind and visually impaired students have trouble interpreting graphics independently without individual instruction. We present PantoGuide, a low-cost system that provides audio and haptic guidance, via skin-stretch feedback to the dorsum of a user’s hand, while the user explores a tactile graphic overlaid on a touchscreen. The system allows haptic guidance patterns and cues for tactile graphics to be programmed, so they can be experienced by students learning remotely or reviewed by a student independently. We propose two teaching scenarios (synchronous and asynchronous) and two guidance interactions (point-to-point and continuous) that the device can support, and we demonstrate their use in a set of applications co-designed with one co-author who is blind and a tactile graphics user.
