Kimberly Vardeman and Jingjing Wu, “LibQUAL+ Results Bring More Questions Than Answers,” in Proceedings of the 2018 Library Assessment Conference: Building Effective, Sustainable, Practical Assessment: December 5–7, 2018, Houston, TX (published October 1, 2019). DOI: 10.29242/lac.2018.34
LibQUAL+ Results Bring More Questions Than Answers
Abstract

The Texas Tech University Libraries conducted the LibQUAL survey in 2017. After receiving the survey results, the libraries had many unanswered questions: what is the next step? What are the problem areas? Which problems should be addressed first? The website was identified as a topic that merited further study. The user experience (UX) department collaborated with the web librarian to outline projects that would gather more evidence to guide their actions. They used a variety of research methods to assess the website: X/O tests to allocate valuable home page real estate to the services and features of most interest to users; card sorting to design more understandable website navigation; usability testing to evaluate whether common tasks could be performed easily; heuristic evaluations of frequently used webpages to check whether they conformed to accepted usability principles; A/B tests to compare different design prototypes; and follow-up surveys to re-evaluate the modifications. By triangulating several data sources, they made informed decisions about how to improve the website. As an initial step, LibQUAL does not offer specific answers, but it suggests potential directions for further study. This paper describes ways to iteratively test the UX of a website using several complementary methods following an exploratory survey. These strategies extend the value of survey results, making assessments more effective and practical. The pattern can be applied not only to a website but also to the evaluation of other services.

Introduction

In 2011, the Texas Tech University (TTU) Libraries conducted the LibQUAL survey. After receiving the results, the library dean commented, “LibQUAL results bring more questions than answers.” At that time, the results were not well disseminated beyond administration, and limited action was taken in response to the survey.
In 2017, under a different dean and with a newly formed user experience (UX) department, the TTU Libraries opted to conduct LibQUAL again. They used a census and received 3,631 valid surveys, a sizable increase over the 584 received in 2011, when they used a sampling method. Participants came from all the sub-groups defined by LibQUAL, and their subject areas covered all the disciplines the university offered. In addition, 1,433 participants shared comments and suggestions about the libraries’ services and resources. The libraries wanted the LibQUAL findings to have a greater impact on services, resources, and spaces than they had in 2011; how to interpret the results, how to share them, and how to make improvements became a challenge for the libraries and the UX department.

Comment Coding and Data Revisualization

Reviewing best practices for interpreting LibQUAL results was a useful starting point, such as the report “Libraries Act on Their LibQUAL+ Findings: From Data to Action.”1 A few presentations focused on the practical aspects of analysis: “Analysis and Interpretation of the LibQUAL Results”2 and “It’s Not about You! Using LibQUAL at Brown University and at the University of Connecticut Libraries.”3 The latter was informative about the importance of coding comment data. Both presentations gave useful guidance on interpreting charts and understanding zones of tolerance. They provided instructions for identifying what was actionable in the results by cross-tabulating desired and adequacy mean scores, in order to determine what users rated as most needed but least adequate. Organizing the open-ended comments was a key part of the analysis process. The LibQUAL email list was tremendously helpful at this stage, as other librarians discussed their strategies for reviewing comments, such as using Brown University’s 2005 Codebook4 as a guide, using an emergent coding strategy,5 and developing a “comments slicer” in Excel.6 The initial work was completed by the UX department’s library associate (a
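The cross-tabulation of desired and adequacy scores described above can be sketched in a few lines. This is a minimal illustration, not the authors’ actual analysis: the item names and ratings below are invented. In LibQUAL, respondents rate each item’s minimum, desired, and perceived service levels on a 1–9 scale, and the adequacy gap is perceived minus minimum, so a high desired mean paired with a low adequacy gap marks an item as most needed but least adequate.

```python
# Sketch of the "most needed but least adequate" cross-tabulation.
# Item names and scores are hypothetical, for illustration only.
from statistics import mean

# per-item responses as (minimum, desired, perceived) ratings on a 1-9 scale
responses = {
    "Library website enabling self-help":   [(6, 8, 6), (7, 9, 6), (6, 8, 7)],
    "Quiet space for individual work":      [(5, 7, 7), (6, 8, 8), (5, 7, 7)],
    "Print/electronic journal collections": [(7, 9, 7), (6, 8, 6), (7, 9, 8)],
}

rows = []
for item, scores in responses.items():
    desired = mean(d for _, d, _ in scores)
    # adequacy gap: perceived minus minimum; negative means the service
    # fell below the minimum acceptable level
    adequacy = mean(p - m for m, _, p in scores)
    rows.append((item, desired, adequacy))

# sort so high-desired, low-adequacy items (the action candidates) come first
rows.sort(key=lambda r: (-r[1], r[2]))
for item, desired, adequacy in rows:
    print(f"{item:40s} desired={desired:.2f}  adequacy gap={adequacy:+.2f}")
```

With real survey exports the same ranking is typically done in a spreadsheet or statistics package; the point is only that two means per item are enough to surface the priority list.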