Access lecture: a mobile application providing visual access to classroom material
S. Ludi, Alex Canter, Lindsey Ellis, Abhishek Shrestha
doi:10.1145/2049536.2049578
Following along with course lecture material is a critical challenge for low vision students. Access Lecture is a mobile, touch-screen application that helps low vision students view class notes in real time. This paper presents an overview of the system, its features, and initial feedback; the current status and next steps are also presented.

{"title":"Session details: Supporting visual interaction","authors":"A. Sears","doi":"10.1145/3253161","DOIUrl":"https://doi.org/10.1145/3253161","url":null,"abstract":"","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131921990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatically generating tailored accessible user interfaces for ubiquitous services
J. Abascal, Amaia Aizpurua, Idoia Cearreta, Borja Gamecho, Nestor Garay-Vitoria, Raúl Miñón
doi:10.1145/2049536.2049570
Ambient Assisted Living environments provide support to people with disabilities and elderly people, usually at home. The concept can be extended to public spaces, where ubiquitous accessible services let people with disabilities use intelligent machines such as information kiosks. A key issue in achieving full accessibility is the instantaneous generation of an adapted, accessible interface suited to the specific user who requests the service. In this paper we present the method the EGOKI interface generator uses to select the most suitable interaction resources and modalities for each user when automatically creating the interface. The validation of the interfaces generated for four different types of users is presented and discussed.

Automatic sign categorization using visual data
Marek Hrúz
doi:10.1145/2049536.2049581
This paper presents a method for visual tracking in recordings of isolated signs, and the use of the tracked features for automatic sign categorization. The tracking method is based on skin color segmentation and is suited to recordings made for a sign language dictionary. The tracker outputs the location and outer contour of the head and both hands. These features are used to sort signs into several categories: movement of the hands, contact between body parts, symmetry of the trajectory, and location of the sign.

{"title":"Session details: User-centric design","authors":"Matt Huenerfauth","doi":"10.1145/3253156","DOIUrl":"https://doi.org/10.1145/3253156","url":null,"abstract":"","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"51 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129360854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Click control: improving mouse interaction for people with motor impairments
Christopher Kwan, Isaac Paquette, John J. Magee, Paul Y. Lee, Margrit Betke
doi:10.1145/2049536.2049582
Camera-based mouse-replacement systems allow people with motor impairments who are unable to use their hands to control the mouse pointer with head movements. To address the difficulties of accidental clicking and of usably simulating a real computer mouse, we developed Click Control, a tool that augments the functionality of these systems. When a user attempts to click, Click Control displays a form that lets them cancel the click if it was accidental, or send different types of clicks through an easy-to-use gesture interface. Initial studies of a prototype with users with motor impairments showed that Click Control improved their mouse control experience.

A mobile phone based personal narrative system
Rolf Black, A. Waller, N. Tintarev, Ehud Reiter, Joseph Reddington
doi:10.1145/2049536.2049568
Currently available commercial Augmentative and Alternative Communication (AAC) technology makes little use of computing power to improve access to the words and phrases needed for personal narrative, an essential part of social interaction. In this paper, we describe the development and evaluation of a mobile phone application that collects data for a personal narrative system for children with severe speech and physical impairments (SSPI). Based on user feedback from the previous project "How was School today?", we developed a modular system in which school staff use a mobile phone to track a child's interactions with people and objects, and the child's location, at school. The phone also lets both school staff and parents or carers at home take digital photographs and record sets of voice messages. The child can play these sets back for immediate narrative sharing, similar to established AAC device interaction using sequential voice recorders. The phone sends all the gathered data to a remote server, where it can be used for automatic narrative generation on the child's PC-based communication aid. Early results from the ongoing evaluation in a special school, with two participating children and school staff, show that staff were able to track interactions, record voice messages, and take photographs. Location tracking was less successful but was supplemented by timetable information. The participating children were able to play back voice messages and show photographs on the phone for interactive narrative sharing, using both direct and switch-activated playback options.

Multi-modal dialogue system with sign language capabilities
M. Hrúz, P. Campr, Z. Krňoul, M. Železný, O. Aran, P. Santemiz
doi:10.1145/2049536.2049599
This paper presents the design of a multimodal, sign-language-enabled dialogue system. Its functionality was tested on a prototype information kiosk for deaf people that provides information about train connections. The input modalities are automatic computer-vision-based sign language recognition, automatic speech recognition, and a touchscreen. Output is shown on a screen displaying a 3D signing avatar and on a touchscreen displaying a graphical user interface. The kiosk can be used by both hearing and deaf users, in several languages. We focus on the sign language input and output modalities.

{"title":"Session details: Sign language comprehension","authors":"S. Trewin","doi":"10.1145/3253157","DOIUrl":"https://doi.org/10.1145/3253157","url":null,"abstract":"","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"42 12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131076313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing accessible TV applications
José Coelho, Carlos M. Duarte, P. Biswas, P. Langdon
doi:10.1145/2049536.2049561
TV applications today exclude users with certain impairments from interacting with, and accessing, the same content as other users. Developers, in turn, have little interest in building new or different versions of an application for different user characteristics. In this paper we describe a novel adaptive-accessibility approach to developing accessible TV applications without requiring much additional effort from developers. Integrating multimodal interaction, adaptation techniques, and the use of simulators in the design process, we show how to adapt user interfaces to the individual needs and limitations of elderly users. To do so, we identify the most relevant impairment configurations among users in practical user trials and relate them to user-specific characteristics. We provide guidelines for more accessible, user-centered TV application development.