Title: Developing a comprehensive evaluation questionnaire for university FAQ administration chatbots
Authors: Luthfiya Essop, Alveen Singh, J. Wing
DOI: 10.1109/ICTAS56421.2023.10082753
Published in: 2023 Conference on Information Communications Technology and Society (ICTAS)
Publication date: 2023-03-01
Citations: 2
Abstract
A chatbot is a domain-specific conversational interface that mimics human assistance for users of various systems. Chatbots have recently received much research interest for supporting university administrative operations; however, rapid and large-scale implementation of chatbots in university administration systems remains challenging. Extant literature reflects on this challenge from various perspectives, including technical, managerial, and socio-technical lenses. This paper heralds a somewhat overlooked perspective, namely the processes and techniques for concise and rigorous evaluation of these chatbots. The distinctiveness of this paper lies in the tri-perspective of anthropomorphism, usability, and user experience, which converge to provide a stronger lens for chatbot evaluation, particularly in a university administration setting. Recent studies primarily devise heuristic methods that tend to evaluate chatbots in silos, such as user interface, usability, or conversation ability and quality. There is a noticeable lack of research that attempts to combine these seemingly complex areas of chatbot evaluation. This paper postulates that evaluation rigour improves when coverage is expanded to usability, anthropomorphism, acceptance, usage, and user interface. The aim of this paper is therefore to design a novel evaluation instrument tailored for a university administration chatbot. This is achieved by adopting the well-known Unified Theory of Acceptance and Use of Technology (UTAUT) framework as the architectural underpinning. Constituent components of the instrument derive from recent literature and emerging trends in the evaluation of chatbots for frequently asked questions. The major contribution stems from the identification and insertion of key but overlooked evaluation perspectives, which culminate in a more rigorous and more encompassing evaluation questionnaire.