Video Gaming for the Vision Impaired
Manohar Swaminathan, Sujeath Pareddy, T. Sawant, Shubi Agarwal
Mainstream video games are predominantly inaccessible to people with visual impairments (VIPs). We present ongoing research that aims to take such games beyond accessibility by making them engaging and enjoyable for visually impaired players. We have built a new interaction toolkit, the Responsive Spatial Audio Cloud (ReSAC), developed around spatial audio technology, to enable visually impaired players to play video games. VIPs successfully finished a simple video game integrated with ReSAC and reported enjoying the experience.
{"title":"Video Gaming for the Vision Impaired","authors":"Manohar Swaminathan, Sujeath Pareddy, T. Sawant, Shubi Agarwal","doi":"10.1145/3234695.3241025","DOIUrl":"https://doi.org/10.1145/3234695.3241025","url":null,"abstract":"Mainstream video games are predominantly inaccessible to people with visual impairments (VIPs). We present ongoing research that aims to make such games go beyond accessibility, by making them engaging and enjoyable for visually impaired players. We have built a new interaction toolkit called the Responsive Spatial Audio Cloud (ReSAC), developed around spatial audio technology, to enable visually impaired players to play video games. VIPs successfully finished a simple video game integrated with ReSAC and reported enjoying the experience.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134011513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Incorporating Social Factors in Accessible Design
Kristen Shinohara, J. Wobbrock, W. Pratt
Personal technologies are rarely designed to be accessible to disabled people, partly due to the perceived challenge of including disability in design. Through design workshops, we addressed this challenge by infusing user-centered design activities with Design for Social Accessibility (DSA), a perspective emphasizing social aspects of accessibility, to investigate how professional designers can leverage social factors to include accessibility in design. We focused on how professional designers incorporated DSA's three tenets: (1) to work with users with and without visual impairments; (2) to consider social and functional factors; (3) to employ tools (a framework and method cards) to raise awareness and prompt reflection on social aspects toward accessible design. We then interviewed designers about their workshop experiences. We found DSA to be an effective set of tools and strategies, incorporating both social and functional factors and the perspectives of disabled and nondisabled users, that helped designers create accessible designs.
{"title":"Incorporating Social Factors in Accessible Design","authors":"Kristen Shinohara, J. Wobbrock, W. Pratt","doi":"10.1145/3234695.3236346","DOIUrl":"https://doi.org/10.1145/3234695.3236346","url":null,"abstract":"Personal technologies are rarely designed to be accessible to disabled people, partly due to the perceived challenge of including disability in design. Through design workshops, we addressed this challenge by infusing user-centered design activities with Design for Social Accessibility-a perspective emphasizing social aspects of accessibility-to investigate how professional designers can leverage social factors to include accessibility in design. We focused on how professional designers incorporated Design for Social Accessibility's three tenets: (1) to work with users with and without visual impairments; (2) to consider social and functional factors; (3) to employ tools-a framework and method cards-to raise awareness and prompt reflection on social aspects toward accessible design. We then interviewed designers about their workshop experiences. We found DSA to be an effective set of tools and strategies incorporating social/functional and non/disabled perspectives that helped designers create accessible design.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131275898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing an Animated Character System for American Sign Language
Danielle Bragg, R. Kushalnagar, R. Ladner
Sign languages lack a standard written form, preventing millions of Deaf people from accessing text in their primary language. A major barrier to adoption is the difficulty of learning a system that represents complex 3D movements with stationary symbols. In this work, we leverage the animation capabilities of modern screens to create the first animated character system prototype for sign language, producing text that combines iconic symbols and movement. Using animation to represent sign movements can increase resemblance to the live language, making the character system easier to learn. We explore this idea through the lens of American Sign Language (ASL), presenting (1) a pilot study underscoring the potential value of an animated ASL character system, (2) a structured approach for designing animations for an existing ASL character system, and (3) a design probe workshop with ASL users eliciting guidelines for animated character system design.
{"title":"Designing an Animated Character System for American Sign Language","authors":"Danielle Bragg, R. Kushalnagar, R. Ladner","doi":"10.1145/3234695.3236338","DOIUrl":"https://doi.org/10.1145/3234695.3236338","url":null,"abstract":"Sign languages lack a standard written form, preventing millions of Deaf people from accessing text in their primary language. A major barrier to adoption is difficulty learning a system which represents complex 3D movements with stationary symbols. In this work, we leverage the animation capabilities of modern screens to create the first animated character system prototype for sign language, producing text that combines iconic symbols and movement. Using animation to represent sign movements can increase resemblance to the live language, making the character system easier to learn. We explore this idea through the lens of American Sign Language (ASL), presenting 1) a pilot study underscoring the potential value of an animated ASL character system, 2) a structured approach for designing animations for an existing ASL character system, and 3) a design probe workshop with ASL users eliciting guidelines for the animated character system design.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123843600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-Identifying Tactile Overlays
Mauro Ávila-Soto, Alexandra Voit, A. Hassan, A. Schmidt, Tonja Machulla
Tactile overlays for touch-screen devices are an opportunity to display content for users with visual impairments. However, when users switch tactile overlays, the content displayed on the touch-screen device still corresponds to the previous overlay. Currently, users have to change the displayed content manually, which hinders fluid interaction. In this paper, we introduce self-identifying overlays, an automated method for touch-screen devices to identify tactile overlays placed on the screen and to adapt the displayed content to the applied overlay. We report on a pilot study with two participants with visual impairments to evaluate this approach with a functional content-exploration application based on an adapted textbook.
{"title":"Self-Identifying Tactile Overlays","authors":"Mauro Ávila-Soto, Alexandra Voit, A. Hassan, A. Schmidt, Tonja Machulla","doi":"10.1145/3234695.3241021","DOIUrl":"https://doi.org/10.1145/3234695.3241021","url":null,"abstract":"Tactile overlays for touch-screen devices are an opportunity to display content for users with visual impairments. However, when users switch tactile overlays, the displayed content on the touch-screen devices still correspond to the previous overlay. Currently, users have to change the displayed content on the touch-screen devices manually which hinders a fluid user interaction. In this paper, we introduce self-identifying overlays - an automated method for touch-screen devices to identify tactile overlays placed on the screen and to adapt the displayed content based on the applied tactile overlay. We report on a pilot study with two participants with visual impairments to evaluate this approach with a functional content exploration application based on an adapted textbook.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122177620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Who Should Have Access to my Pointing Data? Privacy Tradeoffs of Adaptive Assistive Technologies
Foad Hamidi, Kellie Poneres, Aaron K. Massey, A. Hurst
Customizing assistive technologies based on user needs, abilities, and preferences is necessary for accessibility, especially for individuals whose abilities vary due to a diagnosis, medication, or other external factors. Adaptive Assistive Technologies (AATs) that can automatically monitor a user's current abilities and adapt functionality and appearance accordingly offer exciting solutions. However, there is an often-overlooked tradeoff between usability and user privacy when designing such systems. We present a general privacy threat model analysis of AATs and contextualize it with findings from an interview study with older adults who experience pointing problems. We found that participants had positive attitudes toward assistive technologies that gather their personal data, but also had strong preferences for how their data should be used and who should have access to it. We identify a need to consider privacy threats seriously when designing assistive technologies, so as to avoid exposing users to them.
{"title":"Who Should Have Access to my Pointing Data?: Privacy Tradeoffs of Adaptive Assistive Technologies","authors":"Foad Hamidi, Kellie Poneres, Aaron K. Massey, A. Hurst","doi":"10.1145/3234695.3239331","DOIUrl":"https://doi.org/10.1145/3234695.3239331","url":null,"abstract":"Customizing assistive technologies based on user needs, abilities, and preferences is necessary for accessibility, especially for individuals whose abilities vary due to a diagnosis, medication, or other external factors. Adaptive Assistive Technologies (AATs) that can automatically monitor a user's current abilities and adapt functionality and appearance accordingly offer exciting solutions. However, there is an often-overlooked privacy tradeoff between usability and user privacy when designing such systems. We present a general privacy threat model analysis of AATs and contextualize it with findings from an interview study with older adults who experience pointing problems. We found that participants had positive attitude towards assistive technologies that gather their personal data but also had strong preferences for how their data should be used and who should have access to it. We identify a need to seriously consider privacy threats when designing assistive technologies to avoid exposing users to them.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132006645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards More Robust Speech Interactions for Deaf and Hard of Hearing Users
Raymond Fok, Harmanpreet Kaur, Skanda Palani, Martez E. Mott, Walter S. Lasecki
Mobile, wearable, and other ubiquitous computing devices are increasingly creating contexts in which conventional keyboard and screen-based inputs are being replaced by more natural speech-based interactions. Digital personal assistants use speech to control a wide range of functionality, from environmental controls to information access. However, many deaf and hard-of-hearing users have speech patterns that differ from those of hearing users because of incomplete acoustic feedback from their own voices. Because automatic speech recognition (ASR) systems are largely trained on speech from hearing individuals, speech-controlled technologies are typically inaccessible to deaf users. Prior work has focused on providing deaf users access to aural output via real-time captioning or signing, but little has been done to improve users' ability to provide input to these systems' speech-based interfaces. Further, the vocalization patterns of deaf speech often make accurate recognition intractable for both automated systems and human listeners, making traditional approaches to mitigating ASR limitations, such as human captionists, less effective. To bridge this accessibility gap, we investigate the limitations of common speech recognition approaches and techniques, both automatic and human-powered, when applied to deaf speech. We then explore the effectiveness of an iterative crowdsourcing workflow and characterize the potential for groups to collectively exceed the performance of individuals. This paper contributes a better understanding of the challenges of deaf speech recognition and provides insights for future system development in this space.
{"title":"Towards More Robust Speech Interactions for Deaf and Hard of Hearing Users","authors":"Raymond Fok, Harmanpreet Kaur, Skanda Palani, Martez E. Mott, Walter S. Lasecki","doi":"10.1145/3234695.3236343","DOIUrl":"https://doi.org/10.1145/3234695.3236343","url":null,"abstract":"Mobile, wearable, and other ubiquitous computing devices are increasingly creating a context in which conventional keyboard and screen-based inputs are being replaced in favor of more natural speech-based interactions. Digital personal assistants use speech to control a wide range of functionality, from environmental controls to information access. However, many deaf and hard-of-hearing users have speech patterns that vary from those of hearing users due to incomplete acoustic feedback from their own voices. Because automatic speech recognition (ASR) systems are largely trained using speech from hearing individuals, speech-controlled technologies are typically inaccessible to deaf users. Prior work has focused on providing deaf users access to aural output via real-time captioning or signing, but little has been done to improve users' ability to provide input to these systems' speech-based interfaces. Further, the vocalization patterns of deaf speech often make accurate recognition intractable for both automated systems and human listeners, making traditional approaches to mitigate ASR limitations, such as human captionists, less effective. To bridge this accessibility gap, we investigate the limitations of common speech recognition approaches and techniques---both automatic and human-powered---when applied to deaf speech. We then explore the effectiveness of an iterative crowdsourcing workflow, and characterize the potential for groups to collectively exceed the performance of individuals. This paper contributes a better understanding of the challenges of deaf speech recognition and provides insights for future system development in this space.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129826844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design of an Augmented Reality Magnification Aid for Low Vision Users
Lee Stearns, Leah Findlater, Jon E. Froehlich
Augmented reality (AR) systems that enhance visual capabilities could make text and other fine details more accessible for low vision users, improving independence and quality of life. Prior work has begun to investigate the potential of assistive AR, but recent advancements enable new AR visualizations and interactions not yet explored in the context of assistive technology. In this paper, we follow an iterative design process with feedback and suggestions from seven visually impaired participants, designing and testing AR magnification ideas using the Microsoft HoloLens. Participants identified several advantages to the concept of head-worn magnification (e.g., portability, privacy, ready availability), and to our AR designs in particular (e.g., a more natural reading experience and the ability to multitask). We discuss the strengths and weaknesses of this AR magnification approach and summarize lessons learned throughout the process.
{"title":"Design of an Augmented Reality Magnification Aid for Low Vision Users","authors":"Lee Stearns, Leah Findlater, Jon E. Froehlich","doi":"10.1145/3234695.3236361","DOIUrl":"https://doi.org/10.1145/3234695.3236361","url":null,"abstract":"Augmented reality (AR) systems that enhance visual capabilities could make text and other fine details more accessible for low vision users, improving independence and quality of life. Prior work has begun to investigate the potential of assistive AR, but recent advancements enable new AR visualizations and interactions not yet explored in the context of assistive technology. In this paper, we follow an iterative design process with feedback and suggestions from seven visually impaired participants, designing and testing AR magnification ideas using the Microsoft HoloLens. Participants identified several advantages to the concept of head-worn magnification (e.g., portability, privacy, ready availability), and to our AR designs in particular (e.g., a more natural reading experience and the ability to multitask). We discuss the strengths and weaknesses of this AR magnification approach and summarize lessons learned throughout the process.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125725876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gaze Typing using Multi-key Selection Technique
Tanya Bafna
Gaze typing for people with severe motor disabilities, such as full-body paralysis, can be extremely slow and discouraging for daily communication. The most popular gaze-typing technique, dwell-time typing, requires fixating on every letter of a word for a fixed amount of time in order to type it. In this preliminary study, the goal was to test a new gaze-typing technique that requires fixating only on the first and the last letter of the word. Analysis of the collected data suggests that the new technique is 63% faster than dwell-time typing for novices in gaze interaction, without affecting the error rate. Such a technique could substantially improve the communication speed, comfort, and working efficiency of people with disabilities.
{"title":"Gaze Typing using Multi-key Selection Technique","authors":"Tanya Bafna","doi":"10.1145/3234695.3240992","DOIUrl":"https://doi.org/10.1145/3234695.3240992","url":null,"abstract":"Gaze typing for people with extreme motor disabilities like full body paralysis can be extremely slow and discouraging for daily communication. The most popular technique in gaze typing, known as dwell time typing, is based on fixation on every letter of the word for a fixed amount of time, to type the word. In this preliminary study, the goal was to test a new technique of gaze typing that requires fixating only on the first and the last letter of the word. Analysis of the data collected suggests that the newly described technique is 63% faster than dwell time typing for novices in gaze interaction, without influencing the error rate. Using this technique would have a tremendous impact on communication speed, comfort and working efficiency of people with disabilities.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115041520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Accessible Conversations in a Mobile Context for People who are Deaf and Hard of Hearing
D. Jain, Rachel L. Franz, Leah Findlater, Jackson Cannon, R. Kushalnagar, Jon E. Froehlich
Prior work has explored communication challenges faced by people who are deaf and hard of hearing (DHH) and the potential role of new captioning and support technologies to address these challenges; however, the focus has been on stationary contexts such as group meetings and lectures. In this paper, we present two studies examining the needs of DHH people in moving contexts (e.g., walking) and the potential for mobile captions on head-mounted displays (HMDs) to support those needs. Our formative study with 12 DHH participants identifies social and environmental challenges unique to, or exacerbated by, moving contexts. Informed by these findings, we introduce and evaluate a proof-of-concept HMD prototype with 10 DHH participants. Results show that, while walking, HMD captions can support communication access and improve attentional balance between attending to the speaker(s) and navigating the environment. We close by describing open questions in the mobile context space and design guidelines for future technology.
{"title":"Towards Accessible Conversations in a Mobile Context for People who are Deaf and Hard of Hearing","authors":"D. Jain, Rachel L. Franz, Leah Findlater, Jackson Cannon, R. Kushalnagar, Jon E. Froehlich","doi":"10.1145/3234695.3236362","DOIUrl":"https://doi.org/10.1145/3234695.3236362","url":null,"abstract":"Prior work has explored communication challenges faced by people who are deaf and hard of hearing (DHH) and the potential role of new captioning and support technologies to address these challenges; however, the focus has been on stationary contexts such as group meetings and lectures. In this paper, we present two studies examining the needs of DHH people in moving contexts (e.g., walking) and the potential for mobile captions on head-mounted displays (HMDs) to support those needs. Our formative study with 12 DHH participants identifies social and environmental challenges unique to or exacerbated by moving contexts. Informed by these findings, we introduce and evaluate a proof-of-concept HMD prototype with 10 DHH participants. Results show that, while walking, HMD captions can support communication access and improve attentional balance between the speakers(s) and navigating the environment. We close by describing open questions in the mobile context space and design guidelines for future technology.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114899896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility
Enrico Pontelli, S. Trewin

It is our great pleasure to welcome you to the 9th ACM SIGACCESS Conference on Computers and Accessibility -- ASSETS'07. As in the past, ASSETS 2007 explores the potential of computer and information technologies to support and include everyone, regardless of age or disability. ASSETS is the premier forum for presenting innovative research on the design and use of both mainstream and specialized assistive technologies by people of all ages and with different capabilities, and those around them.

The call for papers attracted 86 technical paper submissions from 18 countries spread over 5 continents. A further 33 poster and demonstration submissions were received by the poster and demonstration chairs, Anna Dickinson and Joy Goodman-Deane. All were peer-reviewed by an international program committee, in order to ensure that the accepted work truly represents the state of the art in accessibility. 27 papers and 21 posters and demonstrations were accepted.

ASSETS 2007 continues its tradition of encouraging dialog through a single-track forum with opportunities for delegates to share results, mingle and discuss their work. This year, the conference opens with a keynote speech by Jonathan Wolpaw, professor and research physician at the Wadsworth Center, New York State Department of Health and State University of New York. His presentation describes the latest research in brain-computer interfaces for communication and control. The main conference program continues with seven technical paper sessions and two poster and demonstration sessions. These proceedings contain both the technical papers and two-page extended abstracts for each of the poster and demonstration submissions.

This year's program continues the SIGACCESS student research competition (SRC), sponsored by Microsoft Research. The SRC, chaired by Harriet Fell, is an opportunity for both graduate and undergraduate students to present their work at the conference in poster form. Abstracts from the accepted SRC submissions are included in these proceedings. At the conference, selected entrants will give a short presentation in the main program, and a panel of judges will select one or more finalists, who will be entered into the Grand Finals of ACM's Student Research Competition.

As in previous years, the main program is preceded by a doctoral consortium, sponsored by the National Science Foundation and chaired by Clayton Lewis and Sri Kurniawan. This provides an opportunity for doctoral students in the early stages of research to present their work and receive feedback from peers and a selected pool of experts. All participants in the doctoral consortium will also present their work during one of the main conference poster sessions, and one participant, selected by the doctoral consortium committee, will give a presentation in a conference session.

Following the tradition of the ASSETS conference series, two awards will be made at the conference: the SIGACCESS Best Paper Award, …
{"title":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","authors":"Enrico Pontelli, S. Trewin","doi":"10.1145/3234695","DOIUrl":"https://doi.org/10.1145/3234695","url":null,"abstract":"It is our great pleasure to welcome you to the 9th ACM SIGACCESS Conference on Computers and Accessibility -- ASSETS'07. As in the past, ASSETS 2007 explores the potential of computer and information technologies to support and include everyone, regardless of age or disability. ASSETS is the premier forum for presenting innovative research on the design and use of both mainstream and specialized assistive technologies by people of all ages and with different capabilities, and those around them. \u0000 \u0000The call for papers attracted 86 technical paper submissions from 18 countries spread over 5 continents. A further 33 poster and demonstration submissions were received by the poster and demonstration chairs, Anna Dickinson and Joy Goodman-Deane. All were peer-reviewed by an international program committee, in order to ensure that the accepted work truly represents the state of the art in accessibility. 27 papers and 21 posters and demonstrations were accepted. \u0000 \u0000ASSETS 2007 continues its tradition of encouraging dialog through a single-track forum with opportunities for delegates to share results, mingle and discuss their work. This year, the conference opens with a keynote speech by Jonathan Wolpaw, professor and research physician at the Wadsworth Center, New York State Department of Health and State University of New York. His presentation describes the latest research in brain-computer interfaces for communication and control. The main conference program continues with seven technical paper sessions and two poster and demonstration sessions. These proceedings contain both the technical papers, and two-page extended abstracts for each of the poster and demonstration submissions. \u0000 \u0000This year's program continues the SIGACCESS student research competition (SRC), sponsored by Microsoft Research. The SRC, chaired by Harriet Fell, is an opportunity for both graduate and undergraduate students to present their work at the conference in poster form. Abstracts from the accepted SRC submissions are included in these proceedings. At the conference, selected entrants will give a short presentation in the main program, and a panel of judges will select one or more finalists, who will be entered into the Grand Finals of ACM's Student Research Competition. \u0000 \u0000As in previous years, the main program is preceded by a doctoral consortium, sponsored by the National Science Foundation and chaired by Clayton Lewis and Sri Kurniawan. This provides an opportunity for doctoral students in the early stages of research to present their work and receive feedback from peers and a selected pool of experts. All participants in the doctoral consortium will also present their work during one of the main conference poster sessions, and one participant, selected by the doctoral consortium committee, will give a presentation in a conference session. 
\u0000 \u0000Following the tradition of the ASSETS conference series, two awards will be made at the conference: the SIGACCESS Best Paper Award, ","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127017249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}