{"title":"Academic Libraries Can Develop AI Chatbots for Virtual Reference Services with Minimal Technical Knowledge and Limited Resources","authors":"Matthew Chase","doi":"10.18438/eblip30523","DOIUrl":null,"url":null,"abstract":"A Review of:\nRodriguez, S., & Mune, C. (2022). Uncoding library chatbots: Deploying a new virtual reference tool at the San Jose State University Library. Reference Services Review, 50(3), 392-405. https://doi.org/10.1108/RSR-05-2022-0020\nObjective – To describe the development of an artificial intelligence (AI) chatbot to support virtual reference services at an academic library.\nDesign – Case study.\nSetting – A public university library in the United States.\nSubjects – 1,682 chatbot-user interactions.\nMethods – A university librarian and two graduate student interns researched and developed an AI chatbot to meet virtual reference needs. Developed using chatbot development software, Dialogflow, the chatbot was populated with questions, keywords, and other training phrases entered during user inquiries, text-based responses to inquiries, and intents (i.e., programmed mappings between user inquiries and chatbot responses). The chatbot utilized natural language processing and AI training for basic circulation and reference questions, and included interactive elements and embeddable widgets supported by Kommunicate (i.e., a bot support platform for chat widgets). The chatbot was enabled after live reference hours were over. User interactions with the chatbot were collected across 18 months since its launch. The authors used analytics from Kommunicate and Dialogflow to examine user interactions.\nMain Results – User interactions increased gradually since the launch of the chatbot. The chatbot logged approximately 44 monthly interactions during the spring 2021 term, which increased to approximately 137 monthly interactions during the spring 2022 term. 
The authors identified the most common reasons for users to engage the chatbot, using the chatbot’s triggered intents from user inquiries. These reasons included information about hours for the library building and live reference services, finding library resources (e.g., peer-reviewed articles, books), getting help from a librarian, locating databases and research guides, information about borrowing library items (e.g., laptops, books), and reporting issues with library resources.\nConclusion – Libraries can successfully develop and train AI chatbots with minimal technical expertise and resources. The authors offered user experience considerations from their experience with the project, including editing library FAQs to be concise and easy to understand, testing and ensuring chatbot text and elements are accessible, and continuous maintenance of chatbot content. Kommunicate, Dialogflow, Google Analytics, and Crazy Egg (i.e., a web usage analytics tool) could not provide more in-depth user data (e.g., user clicks, scroll maps, heat maps), with plans to further explore other usage analysis software to collect the data. 
The authors noted that only 10% of users engaged the chatbot beyond the initial welcome prompt, requiring more research and user testing on how to facilitate user engagement.","PeriodicalId":45227,"journal":{"name":"Evidence Based Library and Information Practice","volume":null,"pages":null},"PeriodicalIF":0.4000,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Evidence Based Library and Information Practice","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18438/eblip30523","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
Abstract
A Review of:
Rodriguez, S., & Mune, C. (2022). Uncoding library chatbots: Deploying a new virtual reference tool at the San Jose State University Library. Reference Services Review, 50(3), 392-405. https://doi.org/10.1108/RSR-05-2022-0020
Objective – To describe the development of an artificial intelligence (AI) chatbot to support virtual reference services at an academic library.
Design – Case study.
Setting – A public university library in the United States.
Subjects – 1,682 chatbot-user interactions.
Methods – A university librarian and two graduate student interns researched and developed an AI chatbot to meet virtual reference needs. Built with Dialogflow, a chatbot development platform, the chatbot was populated with intents (i.e., programmed mappings between user inquiries and chatbot responses), each consisting of questions, keywords, and other training phrases drawn from user inquiries, along with text-based responses. The chatbot applied natural language processing and AI training to basic circulation and reference questions, and included interactive elements and embeddable widgets supported by Kommunicate (a bot support platform for chat widgets). The chatbot was enabled outside of live reference hours. User interactions with the chatbot were collected over the 18 months following its launch, and the authors examined them using analytics from Kommunicate and Dialogflow.
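The core concept here is the intent: a mapping from example user phrasings to a canned response. The following is a minimal, hypothetical sketch of that idea in plain Python; the intent names, phrases, and responses are invented for illustration and are not the authors' actual Dialogflow configuration, which would also use trained NLP models rather than simple keyword overlap.

```python
# Toy illustration of an "intent": training phrases mapped to a response,
# matched here by naive keyword overlap (Dialogflow itself uses trained
# NLP matching). All intent names and texts below are invented examples.

def score(inquiry: str, phrases: list[str]) -> int:
    """Count training-phrase words that also appear in the user inquiry."""
    words = set(inquiry.lower().split())
    return sum(1 for p in phrases for w in p.lower().split() if w in words)

INTENTS = {
    "library_hours": {
        "phrases": ["what are the library hours", "when does the library open"],
        "response": "The library is open 8 a.m. to 10 p.m. on weekdays.",
    },
    "borrow_laptop": {
        "phrases": ["can I borrow a laptop", "laptop checkout"],
        "response": "Laptops can be borrowed at the circulation desk.",
    },
}

def match_intent(inquiry: str) -> str:
    """Return the best-scoring intent's response, or a fallback message."""
    best = max(INTENTS.values(), key=lambda i: score(inquiry, i["phrases"]))
    if score(inquiry, best["phrases"]) == 0:
        return "Sorry, I didn't understand. A librarian can help during live hours."
    return best["response"]
```

For example, `match_intent("when does the library open today")` matches the hypothetical `library_hours` intent, while an unrecognized inquiry falls through to the fallback, mirroring how a production chatbot hands off unmatched questions.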
Main Results – User interactions increased gradually after the chatbot's launch, from approximately 44 monthly interactions during the spring 2021 term to approximately 137 monthly interactions during the spring 2022 term. Using the intents triggered by user inquiries, the authors identified the most common reasons users engaged the chatbot: information about hours for the library building and live reference services, finding library resources (e.g., peer-reviewed articles, books), getting help from a librarian, locating databases and research guides, information about borrowing library items (e.g., laptops, books), and reporting issues with library resources.
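The analysis above amounts to tallying which intents fired most often. A minimal sketch of that kind of tally, assuming a simple list of logged intent names (the log entries and intent names below are invented, not the authors' data):

```python
# Hypothetical tally of triggered-intent logs to surface the most common
# reasons users engage the chatbot. The log is invented for illustration.
from collections import Counter

log = [
    "library_hours", "find_articles", "library_hours",
    "ask_librarian", "borrow_items", "library_hours",
]

top_reasons = Counter(log).most_common(2)
# top entry: ("library_hours", 3)
```

Platforms such as Dialogflow expose this kind of per-intent count through their analytics dashboards; the point here is only that intent frequencies directly translate into "most common reasons for engagement."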
Conclusion – Libraries can successfully develop and train AI chatbots with minimal technical expertise and resources. Drawing on the project, the authors offered user experience recommendations: editing library FAQs to be concise and easy to understand, testing chatbot text and elements to ensure they are accessible, and continuously maintaining chatbot content. Kommunicate, Dialogflow, Google Analytics, and Crazy Egg (a web usage analytics tool) could not provide more in-depth user data (e.g., user clicks, scroll maps, heat maps), so the authors plan to explore other usage-analysis software to collect it. They also noted that only 10% of users engaged the chatbot beyond the initial welcome prompt, indicating that more research and user testing are needed on how to facilitate user engagement.