Machine Generation of Audio Description for Blind and Visually Impaired People

V. P. Campos, L. Gonçalves, Wesnydy L. Ribeiro, T. Araújo, T. G. do Rêgo, Pedro H. V. Figueiredo, Suanny Vieira, Thiago F. S. Costa, Caio Moraes, Alexandre C. S. Cruz, F. A. Araújo, Guido L. Souza Filho

ACM Transactions on Accessible Computing, vol. 134, no. 1, pp. 1–28. Published: April 14, 2023. DOI: 10.1145/3590955
Abstract: Automating the generation of audio description (AD) for blind and visually impaired (BVI) people is a difficult task, since it involves several challenges: identifying gaps in dialogues; describing the essential elements; summarizing the descriptions and fitting them into the dialogue gaps; and generating an AD narration track and synchronizing it with the main soundtrack. In our previous work (Campos et al. [6]), we proposed a solution for automatic AD script generation, named CineAD, which uses the movie's script as the basis for AD generation. This article extends that solution by classifying visual information from the video, both to complement the information extracted from the script and to reduce the solution's dependency on it. To assess the viability of the proposed solution, we implemented a proof of concept and evaluated it with 11 blind users. The results showed that the extended solution could generate a more succinct and objective AD while yielding a level of user understanding similar to our previous work. Thus, the solution can provide relevant information to blind users while using less video time for descriptions.
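The paper itself does not publish CineAD's implementation, but the first two challenges the abstract names, finding dialogue gaps and fitting a description into one, can be illustrated with a minimal sketch. The Python below is a hypothetical illustration, assuming dialogue timings are already available as (start, end) pairs in seconds (e.g., parsed from an SRT subtitle file); all function names, the minimum gap length, and the narration speed are assumptions, not values from the paper.

```python
# Hypothetical sketch of the dialogue-gap identification step (not CineAD's actual code).
# Assumes dialogue intervals are (start, end) tuples in seconds, e.g. parsed from subtitles.

def find_dialogue_gaps(dialogues, video_duration, min_gap=1.0):
    """Return (start, end) intervals with no speech that are at least min_gap seconds long."""
    gaps = []
    cursor = 0.0
    for start, end in sorted(dialogues):
        if start - cursor >= min_gap:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    # Trailing silence after the last line of dialogue also counts as a gap.
    if video_duration - cursor >= min_gap:
        gaps.append((cursor, video_duration))
    return gaps

def fits_gap(description, gap, words_per_second=2.5):
    """Estimate whether a narrated description fits inside a gap (assumed narration rate)."""
    start, end = gap
    return len(description.split()) / words_per_second <= end - start

# Example: dialogue at 0-4s and 10-15s in a 20-second clip.
gaps = find_dialogue_gaps([(0.0, 4.0), (10.0, 15.0)], 20.0)
# -> [(4.0, 10.0), (15.0, 20.0)]
print(fits_gap("A man enters the dimly lit room.", gaps[0]))  # True: ~2.8s of speech in a 6s gap
```

In a full pipeline, descriptions that fail the `fits_gap` check would be the ones the abstract says must be summarized before insertion; the remaining steps (generating the narration track and mixing it with the main soundtrack) are outside this sketch.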
About the journal:
Computer and information technologies have redesigned the way modern society operates. Their widespread use poses both opportunities and challenges for people who experience various disabilities, including age-related disabilities. That is, while there are new avenues to assist individuals with disabilities and provide tools and resources to alleviate the traditional barriers encountered by these individuals, in many cases the technology itself presents barriers to use. ACM Transactions on Accessible Computing (TACCESS) is a quarterly peer-reviewed journal that publishes refereed articles addressing barriers to access, either by creating new solutions or by promoting the more inclusive design of technology, to provide access for individuals with diverse abilities. The journal provides a technical forum for disseminating innovative research that covers either applications of computing and information technologies as assistive systems or inclusive technologies for individuals with disabilities. Some examples are web accessibility for people with visual impairments and blindness, web search exploration for people with limited cognitive abilities, technologies for stroke rehabilitation or dementia care, language support systems for deaf signers or those with limited language abilities, and input systems for individuals with limited ability to control traditional mouse and keyboard systems. The journal is of particular interest to SIGACCESS members and delegates to its affiliated conference (i.e., ASSETS), as well as to other international accessibility conferences. It serves as a forum for discussion and information exchange among researchers, clinicians, and educators, including rehabilitation personnel who administer assistive technologies, and policy makers concerned with equitable access to information technologies.