{"title":"Towards Non-Visual Web Search","authors":"Alexandra Vtyurina","doi":"10.1145/3295750.3298976","DOIUrl":null,"url":null,"abstract":"Speech-based user interfaces and, in particular, voice-activated digital assistants are gaining popularity. Assistants provide their users with an opportunity for hands-free interaction, and present an additional accessibility level for people who are blind. According to prior research, informational searches form a noticeable fraction of user interactions with the assistants. All major commercially available assistants handle factoid questions well by providing an answer that is quick, concise, and to-the-point. However, for complex information seeking intents, when a deeper exploration and multi-turn interaction may be required, the assistants often do not produce the desired results. One of the main challenges for designing a voice-based web search system is the higher cognitive load for audio perception compared to visual perception. Additionally, close attention should be paid at differences in designing for different user groups, as their information seeking styles and design needs and may differ. In this work, we discuss the challenges of designing systems for non-visual ad-hoc web search and exploration and outline a set of proposed experiments tackling various aspects of non-visual web search.","PeriodicalId":187771,"journal":{"name":"Proceedings of the 2019 Conference on Human Information Interaction and Retrieval","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 Conference on Human Information Interaction and Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3295750.3298976","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5
Abstract
Speech-based user interfaces and, in particular, voice-activated digital assistants are gaining popularity. Assistants provide their users with an opportunity for hands-free interaction and offer an additional level of accessibility for people who are blind. According to prior research, informational searches form a noticeable fraction of user interactions with assistants. All major commercially available assistants handle factoid questions well by providing answers that are quick, concise, and to the point. However, for complex information-seeking intents, where deeper exploration and multi-turn interaction may be required, assistants often do not produce the desired results. One of the main challenges in designing a voice-based web search system is the higher cognitive load of audio perception compared to visual perception. Additionally, close attention should be paid to differences in designing for different user groups, as their information-seeking styles and design needs may differ. In this work, we discuss the challenges of designing systems for non-visual ad-hoc web search and exploration and outline a set of proposed experiments tackling various aspects of non-visual web search.