Joshua Wade, Heathman S Nichols, Megan Ichinose, Dayi Bian, Esube Bekele, Matthew Snodgress, Ashwaq Zaini Amat, Eric Granholm, Sohee Park, Nilanjan Sarkar
Emotion recognition impairment is a core feature of schizophrenia (SZ), present throughout all stages of the condition, and leads to poor social outcomes. However, the underlying mechanisms that give rise to such deficits have not been elucidated, and hence it has been difficult to develop precisely targeted interventions. Evidence supports the use of methods designed to modify patterns of visual attention in individuals with SZ in order to effect meaningful improvements in social cognition. To date, however, attention-shaping systems have not fully utilized available technology (e.g., eye tracking) to achieve this goal. The current work consisted of the design and feasibility testing of a novel gaze-sensitive social skills intervention system called MASI-VR. Adults from an outpatient clinic with a confirmed SZ diagnosis (n=10) and a comparison sample of neurotypical participants (n=10) were evaluated on measures of emotion recognition and visual attention at baseline, and the intervention system was pilot-tested with the SZ sample following five training sessions over three weeks. Consistent with the literature, participants in the SZ group demonstrated lower recognition of faces showing medium-intensity fear, spent more time deliberating about presented emotions, and made fewer fixations than their neurotypical peers. Furthermore, participants in the SZ group showed significant improvement in the recognition of fearful faces post-training. Preliminary evidence supports the feasibility of a gaze-sensitive paradigm for the assessment and training of emotion recognition and social attention in individuals with SZ, warranting further evaluation of the novel intervention.
"Extraction of Emotional Information via Visual Scanning Patterns: A Feasibility Study of Participants with Schizophrenia and Neurotypical Individuals." ACM Transactions on Accessible Computing, vol. 11, no. 4, November 2018. DOI: 10.1145/3282434.
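The fixation measures reported above (fixation counts, deliberation time) are standard eye-tracking metrics. As an illustration of how such measures can be derived from raw gaze data, here is a minimal dispersion-threshold (I-DT) fixation detector in Python; the sampling rate and thresholds are assumptions, and this is not the MASI-VR implementation.

```python
# Illustrative dispersion-threshold (I-DT) fixation detection, not the
# MASI-VR implementation: gaze samples are (x, y) screen coordinates
# captured at a fixed (assumed) sampling rate.

def dispersion(window):
    """Dispersion of a window of (x, y) samples: x-range + y-range."""
    xs = [x for x, _ in window]
    ys = [y for _, y in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, rate_hz=60, min_dur_s=0.1, max_disp=35.0):
    """Group consecutive gaze samples into fixations.

    A window counts as a fixation when it spans at least min_dur_s
    seconds and its dispersion stays below max_disp pixels; the window
    is then grown as long as the dispersion criterion holds.
    """
    min_len = int(min_dur_s * rate_hz)
    fixations, i = [], 0
    while i + min_len <= len(samples):
        if dispersion(samples[i:i + min_len]) <= max_disp:
            j = i + min_len
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_disp:
                j += 1
            n = j - i
            fixations.append({
                "x": sum(x for x, _ in samples[i:j]) / n,
                "y": sum(y for _, y in samples[i:j]) / n,
                "dur_s": n / rate_hz,
            })
            i = j
        else:
            i += 1  # slide past noisy samples
    return fixations
```

Fewer fixations and longer deliberation times, as reported for the SZ group, would fall directly out of the list this function returns.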
Mobile optical character recognition (OCR) apps have come of age. Many blind individuals use them on a daily basis. The usability of such tools, however, is limited by the requirement that a good picture of the text to be read must be taken, something that is difficult to do without sight. Some mobile OCR apps already implement auto-shot and guidance mechanisms to facilitate this task. In this paper, we describe two experiments with blind participants, who tested these two interactive mechanisms on a customized iPhone implementation. These experiments bring to light a number of interesting aspects of accessing a printed document without sight, and enable a comparative analysis of the available interaction modalities.
Michael Cutter, Roberto Manduchi. "Improving the Accessibility of Mobile OCR Apps Via Interactive Modalities." ACM Transactions on Accessible Computing, vol. 10, no. 4, October 2017. DOI: 10.1145/3075300.
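The auto-shot mechanism described above reduces to a simple stability rule: capture automatically once the detected text block has stayed fully inside the frame, with some clearance, for several consecutive frames, and speak a directional cue otherwise. The resolution, margin, cue wording, and stability count in this sketch are all illustrative assumptions, not the paper's iPhone implementation.

```python
# Hypothetical sketch of an auto-shot heuristic of the kind the paper
# evaluates: names, resolution, margin, and cue wording are assumptions.

FRAME_W, FRAME_H = 1920, 1080   # assumed camera resolution
MARGIN = 50                     # required clearance from frame edges, px
STABLE_FRAMES = 10              # consecutive stable frames before shooting

def guidance(bbox):
    """Spoken cue for a detected text bounding box
    (left, top, right, bottom) in frame coordinates."""
    left, top, right, bottom = bbox
    if left < MARGIN:
        return "aim left"       # text runs off the left edge
    if right > FRAME_W - MARGIN:
        return "aim right"
    if top < MARGIN:
        return "tilt up"
    if bottom > FRAME_H - MARGIN:
        return "tilt down"
    return "hold still"

class AutoShot:
    """Fires once the text block has been stable in view long enough."""

    def __init__(self):
        self.stable = 0

    def update(self, bbox):
        cue = guidance(bbox)
        self.stable = self.stable + 1 if cue == "hold still" else 0
        return "shoot" if self.stable >= STABLE_FRAMES else cue
```

The stability counter is the key design choice: it prevents the app from firing on a single lucky frame while the user is still sweeping the camera.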
Kyle Rector, Roger Vilardaga, Leo Lansky, Kellie Lu, Cynthia L Bennett, Richard E Ladner, Julie A Kientz
People who are blind or low vision may have a harder time participating in exercise due to inaccessibility or lack of encouragement. To address this, we developed Eyes-Free Yoga, a Microsoft Kinect-based exergame that acts as a yoga instructor and gives personalized auditory feedback based on skeletal tracking. We conducted two studies on two versions of Eyes-Free Yoga: (1) a controlled study with 16 people who are blind or low vision to evaluate the feasibility of a proof-of-concept and (2) an 8-week in-home deployment study with 4 people who are blind or low vision, using a fully functioning exergame containing four full workouts and motivational techniques. We found that participants preferred the personalized feedback for yoga postures during the laboratory study. The personalized feedback was therefore used to build the core components of the system in the deployment study and was included in both study conditions. From the deployment study, we found that the participants practiced yoga consistently throughout the 8-week period (on average, 17 hours over 24 days of practice), almost reaching the American Heart Association's recommended exercise guidelines. On average, the motivational techniques improved participants' user experience and increased their exercise frequency and duration. The findings of this work have implications for eyes-free exergame design, including engaging domain experts, piloting with inexperienced users, using musical metaphors, and designing for in-home use cases.
"Design and Real-World Evaluation of Eyes-Free Yoga: An Exergame for Blind and Low-Vision Exercise." ACM Transactions on Accessible Computing, vol. 9, no. 4, April 2017. DOI: 10.1145/3022729.
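Posture feedback from skeletal tracking generally comes down to comparing tracked joint angles against a target pose. The sketch below shows the idea for one hypothetical rule (a straight arm); the joints, target angle, tolerance, and spoken cues are assumptions, not the actual Eyes-Free Yoga rules.

```python
import math

# Illustrative of the kind of rule a Kinect-based yoga instructor can
# apply to skeleton data: compute a joint angle from three tracked 3-D
# points and turn the deviation from a target pose into a spoken cue.
# The specific joints, target, tolerance, and wording are assumptions.

def joint_angle(a, b, c):
    """Angle at vertex b (degrees) formed by 3-D points a-b-c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(v1[i] * v2[i] for i in range(3))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

def arm_cue(shoulder, elbow, wrist, target=180.0, tolerance=15.0):
    """Spoken feedback on how straight the arm is held."""
    angle = joint_angle(shoulder, elbow, wrist)
    if abs(angle - target) <= tolerance:
        return "good, hold that position"
    return "straighten your arm a little more"
```

Because the feedback is computed from angles rather than absolute positions, it works regardless of where the player stands relative to the sensor.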
Dragan Ahmetovic, Roberto Manduchi, James M Coughlan, Sergio Mascetti
For blind travelers, finding crosswalks and remaining within their borders while traversing them is a crucial part of any trip involving street crossings. While standard Orientation & Mobility (O&M) techniques allow blind travelers to safely negotiate street crossings, additional information about crosswalks and other important features at intersections would be helpful in many situations, resulting in greater safety and/or comfort during independent travel. For instance, in planning a trip a blind pedestrian may wish to be informed of the presence of all marked crossings near a desired route. We conducted a survey of several O&M experts from the United States and Italy to determine the role that crosswalks play in travel by blind pedestrians. The results show stark differences between survey respondents from the U.S. and those from Italy: the former group emphasized the importance of following standard O&M techniques at all legal crossings (marked or unmarked), while the latter group strongly recommended crossing at marked crossings whenever possible. These contrasting opinions reflect differences in the traffic regulations of the two countries and highlight the diversity of needs that travelers in different regions may have. To address the challenges faced by blind pedestrians in negotiating street crossings, we devised a computer vision-based technique that mines existing spatial image databases to discover zebra crosswalks in urban settings. Our algorithm first searches for zebra crosswalks in satellite images; all candidates thus found are validated against spatially registered Google Street View images.
To this end, we developed a Pedestrian Crossing Human Validation (PCHV) web service, which supports crowdsourcing to rule out false positives and identify false negatives.
"Mind your crossings: Mining GIS imagery for crosswalk localization." ACM Transactions on Accessible Computing, vol. 9, no. 4, April 2017. DOI: 10.1145/3046790.
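The cascaded satellite-then-street-view search lends itself to a compact sketch: a fast, high-recall first stage proposes candidates over the whole image dataset, and a slower, high-precision second stage confirms them. The detector functions below are crude stand-ins (a toy stripe test in place of the authors' actual classifiers), shown only to illustrate the cascade structure.

```python
# Sketch of a two-stage cascade: a cheap satellite-image stage with
# high recall feeds a costlier street-view validation stage with high
# precision. Both detector functions are illustrative stand-ins.

def looks_striped(profile, threshold=128, min_bands=4):
    """Toy zebra test: count bright/dark alternations along a 1-D
    intensity profile sampled across a candidate region."""
    bands = 1
    for prev, cur in zip(profile, profile[1:]):
        if (prev >= threshold) != (cur >= threshold):
            bands += 1
    return bands >= min_bands

def cascade_detect(tiles, satellite_stage, street_stage):
    """Run the cascade over an iterable of map tiles.

    satellite_stage(tile) -> candidate crosswalk locations (fast)
    street_stage(candidate) -> True if street imagery confirms (slow)
    """
    confirmed = []
    for tile in tiles:
        for candidate in satellite_stage(tile):
            if street_stage(candidate):
                confirmed.append(candidate)
    return confirmed
```

The point of the cascade is economy: the expensive street-view check runs only on the small fraction of locations the satellite stage flags, which is what makes a city-scale scan tractable.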
Gazihan Alankus, Rachel Proffitt, Caitlin L. Kelleher, J. Engsberg
In the United States alone, more than five million people are living with long-term motor impairments caused by a stroke. Video game-based therapies show promise in helping people recover lost range of motion and motor control. While researchers have demonstrated the potential utility of game-based rehabilitation through controlled studies, relatively little work has explored longer-term home-based use of therapeutic games. We conducted a six-week home study with a 62-year-old woman who was seventeen years post-stroke. She played therapeutic games for approximately one hour a day, five days a week. Over the six weeks, she recovered significant motor abilities, which is unexpected given the time since her stroke. Through observations and interviews, we present lessons learned about the barriers and opportunities that arise from long-term home-based use of therapeutic games.
"Stroke therapy through motion-based games: a case study." ACM Transactions on Accessible Computing, November 2011. DOI: 10.1145/1878803.1878842.
The purpose of this study was to determine whether the presence or absence of digitized 1-2 word voice output on a direct selection, customized augmentative and alternative communication (AAC) device would affect the impoverished conversations of persons with dementia. Thirty adults with moderate Alzheimer's disease participated in two personally relevant conversations with an AAC device. For 12 of the participants the AAC device included voice output. The AAC device was the Flexiboard™ containing 16 messages needed to discuss a favorite autobiographical topic chosen by the participant and his/her family caregivers. Ten-minute conversations were videotaped in participants' residences and analyzed for four conversational measures related to the participants' communicative behavior. Results show that AAC devices with digitized voice output depress conversational performance and distract participants with moderate Alzheimer's disease as compared to similar devices without voice output. There were significantly more 1-word utterances and fewer total utterances when AAC devices included voice output, and the rate of topic elaborations/initiations was significantly lower when voice output was present. Discussion about the novelty of voice output for this population of elders and the need to train elders to use this technology is provided.
Melanie Fried-Oken, Charity Rowland, Glory Baker, Mayling Dixon, Carolyn Mills, Darlene Schultz, Barry Oken. "The Effect of Voice Output on the AAC-Supported Conversations of Persons with Alzheimer's Disease." ACM Transactions on Accessible Computing, vol. 1, no. 3, p. 15, March 2009. DOI: 10.1145/1497302.1497305.