Developing an Interactive Agent for Blind and Visually Impaired People
V. Stragier, Omar Seddati, T. Dutoit
Proceedings of the 2023 ACM International Conference on Interactive Media Experiences
Published: 2023-06-12
DOI: 10.1145/3573381.3596471
Citations: 0
Abstract
The aim of this project is to create an interactive assistant that integrates several assistive features for blind and visually impaired people. The assistant could incorporate screen readers, magnifiers, speech synthesis, OCR, GPS navigation, face recognition, and object recognition, among other tools. The recent work by OpenAI and Be My Eyes on integrating GPT-4 pursues a goal comparable to that of this project, and it shows that developing such an interactive assistant has become simpler thanks to recent advances in large language models. However, older methods such as named entity recognition and intent classification remain valuable for building lightweight assistants. A hybrid solution combining both approaches appears feasible: it would reduce the assistant's computational cost and simplify the data collection process. Although more complex to implement in a multilingual and multimodal context, such a hybrid solution could run offline and consume fewer resources.
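The hybrid routing idea described above can be sketched as follows: a lightweight intent classifier handles recognizable requests locally, and only unmatched queries fall through to a large language model. This is a minimal illustrative sketch, not the paper's implementation; the intent names, keyword sets, and the `fallback_llm` label are all assumptions introduced for the example.

```python
# Minimal sketch of hybrid intent routing for an assistive agent.
# A cheap keyword-overlap classifier resolves common intents on-device;
# anything it cannot match is labeled for hand-off to an LLM.
# Intent names and keywords below are illustrative, not from the paper.

INTENT_KEYWORDS = {
    "read_text": {"read", "ocr", "text", "sign"},
    "navigate": {"where", "gps", "route", "navigate"},
    "identify_person": {"who", "face", "person"},
}

def classify_intent(utterance: str) -> str:
    """Return the best-matching intent, or 'fallback_llm' when none match."""
    tokens = set(utterance.lower().split())
    best_intent, best_overlap = "fallback_llm", 0
    for intent, keywords in INTENT_KEYWORDS.items():
        overlap = len(tokens & keywords)
        if overlap > best_overlap:
            best_intent, best_overlap = intent, overlap
    return best_intent

print(classify_intent("read the text on this sign"))      # read_text
print(classify_intent("who is standing in front of me"))  # identify_person
print(classify_intent("tell me a joke"))                  # fallback_llm
```

A real system would replace the keyword sets with a trained classifier, but the control flow stays the same: resolve locally when confident, defer to the heavier model otherwise, which is what keeps the computational cost and data requirements low.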