{"title":"LibQUAL+结果带来的问题多于答案","authors":"Kimberly Vardeman, Jingjing Wu","doi":"10.29242/lac.2018.34","DOIUrl":null,"url":null,"abstract":"The Texas Tech University Libraries conducted the LibQUAL survey in 2017. After receiving the survey results, the libraries had many unanswered questions—what is the next step? What are the problem areas? Which problems should be addressed first? The website was identified as a topic that merited further study. The user experience (UX) department collaborated with the web librarian to outline projects to gather more evidence that would guide their action. They used a variety of research methods to assess the website: X/O tests to allocate valuable home page real estate to the services and features of most interest to users; card sorting to design a more understandable website navigation; usability testing to evaluate whether common tasks could be performed easily; heuristic evaluations of frequently used webpages to see if they conformed to accepted usability principles; A/B tests to compare different design prototypes; and subsequent surveys to re-evaluate the modifications. By the triangulation of several data sources, they made informed decisions about how to improve the website. As an initial step, LibQUAL does not offer specific answers, but suggests potential directions for further study. This paper describes ways to iteratively test the UX of a website using several complementary methods following an exploratory survey. These strategies extend the value of survey results, making assessments more effective and practical. This pattern can be used not only for a website but for evaluating other services. Introduction In 2011, the Texas Tech University (TTU) Libraries conducted the LibQUAL survey. After receiving the results, the library dean made the comment, “LibQUAL results bring more questions than answers.” At that time, the results were not well disseminated beyond administration, and limited action was taken in response to the survey. In 2017, under a different dean and with a newly-formed user experience (UX) department, the TTU Libraries opted to conduct LibQUAL again. They used a census and received 3,631 valid surveys—a sizable increase over the 584 received in 2011 when they used a sampling method. Participants came from all the sub-groups defined by LibQUAL, and their subject areas covered all the disciplines the university offered. In addition, 1,433 participants shared comments and suggestions about the libraries’ services and resources. The libraries wanted the LibQUAL findings to have a greater impact on services, resources, and spaces than they had in 2011—how to interpret the results, how to share them, and how to make improvements became a challenge for the libraries and the UX department. Comment Coding and Data Revisualization Reviewing best practices for interpreting LibQUAL results was a useful starting point, such as the report, “Libraries Act on Their LibQUAL+ Findings: From Data to Action.”1 There were a few presentations that focused on the practical aspects of analysis: “Analysis and Interpretation of the LibQUAL Results,”2 and “It’s Not about You! Using LibQUAL at Brown University and at the University of Connecticut Libraries.”3 The latter was informative about the importance of coding comment data. Both presentations gave useful guidance on interpreting charts and understanding zones of tolerance. 
They provided instructions for identifying what was actionable from the results by cross-tabulating desired and adequacy mean scores in order to determine what users rated as most needed but least adequate. Organizing the open-ended comments was a key part of the analysis process. The LibQUAL email list was tremendously helpful at this stage as other librarians discussed their strategies for reviewing comments, such as using Brown University’s 2005 Codebook4 as a guide, using an emergent coding strategy,5 and developing a “comments slicer” in Excel.6 The initial work was completed by the UX department’s library associate (a","PeriodicalId":193553,"journal":{"name":"Proceedings of the 2018 Library Assessment Conference: Building Effective, Sustainable, Practical Assessment: December 5–7, 2018, Houston, TX","volume":"57 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"LibQUAL+ Results Bring More Questions Than Answers\",\"authors\":\"Kimberly Vardeman, Jingjing Wu\",\"doi\":\"10.29242/lac.2018.34\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The Texas Tech University Libraries conducted the LibQUAL survey in 2017. After receiving the survey results, the libraries had many unanswered questions—what is the next step? What are the problem areas? Which problems should be addressed first? The website was identified as a topic that merited further study. The user experience (UX) department collaborated with the web librarian to outline projects to gather more evidence that would guide their action. They used a variety of research methods to assess the website: X/O tests to allocate valuable home page real estate to the services and features of most interest to users; card sorting to design a more understandable website navigation; usability testing to evaluate whether common tasks could be performed easily; heuristic evaluations of frequently used webpages to see if they conformed to accepted usability principles; A/B tests to compare different design prototypes; and subsequent surveys to re-evaluate the modifications. By the triangulation of several data sources, they made informed decisions about how to improve the website. As an initial step, LibQUAL does not offer specific answers, but suggests potential directions for further study. This paper describes ways to iteratively test the UX of a website using several complementary methods following an exploratory survey. These strategies extend the value of survey results, making assessments more effective and practical. This pattern can be used not only for a website but for evaluating other services. Introduction In 2011, the Texas Tech University (TTU) Libraries conducted the LibQUAL survey. After receiving the results, the library dean made the comment, “LibQUAL results bring more questions than answers.” At that time, the results were not well disseminated beyond administration, and limited action was taken in response to the survey. In 2017, under a different dean and with a newly-formed user experience (UX) department, the TTU Libraries opted to conduct LibQUAL again. They used a census and received 3,631 valid surveys—a sizable increase over the 584 received in 2011 when they used a sampling method. Participants came from all the sub-groups defined by LibQUAL, and their subject areas covered all the disciplines the university offered. 
In addition, 1,433 participants shared comments and suggestions about the libraries’ services and resources. The libraries wanted the LibQUAL findings to have a greater impact on services, resources, and spaces than they had in 2011—how to interpret the results, how to share them, and how to make improvements became a challenge for the libraries and the UX department. Comment Coding and Data Revisualization Reviewing best practices for interpreting LibQUAL results was a useful starting point, such as the report, “Libraries Act on Their LibQUAL+ Findings: From Data to Action.”1 There were a few presentations that focused on the practical aspects of analysis: “Analysis and Interpretation of the LibQUAL Results,”2 and “It’s Not about You! Using LibQUAL at Brown University and at the University of Connecticut Libraries.”3 The latter was informative about the importance of coding comment data. Both presentations gave useful guidance on interpreting charts and understanding zones of tolerance. They provided instructions for identifying what was actionable from the results by cross-tabulating desired and adequacy mean scores in order to determine what users rated as most needed but least adequate. Organizing the open-ended comments was a key part of the analysis process. The LibQUAL email list was tremendously helpful at this stage as other librarians discussed their strategies for reviewing comments, such as using Brown University’s 2005 Codebook4 as a guide, using an emergent coding strategy,5 and developing a “comments slicer” in Excel.6 The initial work was completed by the UX department’s library associate (a\",\"PeriodicalId\":193553,\"journal\":{\"name\":\"Proceedings of the 2018 Library Assessment Conference: Building Effective, Sustainable, Practical Assessment: December 5–7, 2018, Houston, TX\",\"volume\":\"57 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2018 Library Assessment Conference: Building Effective, Sustainable, Practical Assessment: December 5–7, 2018, Houston, TX\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.29242/lac.2018.34\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2018 Library Assessment Conference: Building Effective, Sustainable, Practical Assessment: December 5–7, 2018, Houston, TX","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.29242/lac.2018.34","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
LibQUAL+ Results Bring More Questions Than Answers
The Texas Tech University Libraries conducted the LibQUAL survey in 2017. After receiving the survey results, the libraries had many unanswered questions: What is the next step? What are the problem areas? Which problems should be addressed first? The website was identified as a topic that merited further study. The user experience (UX) department collaborated with the web librarian to outline projects to gather more evidence that would guide their action. They used a variety of research methods to assess the website: X/O tests to allocate valuable home page real estate to the services and features of most interest to users; card sorting to design more understandable website navigation; usability testing to evaluate whether common tasks could be performed easily; heuristic evaluations of frequently used webpages to see if they conformed to accepted usability principles; A/B tests to compare different design prototypes; and follow-up surveys to re-evaluate the modifications. By triangulating several data sources, they made informed decisions about how to improve the website. As an initial step, LibQUAL does not offer specific answers but rather suggests potential directions for further study. This paper describes ways to iteratively test the UX of a website using several complementary methods following an exploratory survey. These strategies extend the value of survey results, making assessment more effective and practical. This pattern can be applied not only to a website but also to the evaluation of other services.

Introduction

In 2011, the Texas Tech University (TTU) Libraries conducted the LibQUAL survey. After receiving the results, the library dean remarked, “LibQUAL results bring more questions than answers.” At that time, the results were not well disseminated beyond administration, and limited action was taken in response to the survey. In 2017, under a different dean and with a newly formed user experience (UX) department, the TTU Libraries opted to conduct LibQUAL again. Using a census rather than the sampling method of 2011, they received 3,631 valid surveys, a sizable increase over the 584 received six years earlier. Participants came from all the sub-groups defined by LibQUAL, and their subject areas covered all the disciplines the university offered. In addition, 1,433 participants shared comments and suggestions about the libraries’ services and resources. The libraries wanted the LibQUAL findings to have a greater impact on services, resources, and spaces than they had in 2011; how to interpret the results, how to share them, and how to make improvements became a challenge for the libraries and the UX department.

Comment Coding and Data Revisualization

Reviewing best practices for interpreting LibQUAL results, such as the report “Libraries Act on Their LibQUAL+ Findings: From Data to Action,”1 was a useful starting point. A few presentations focused on the practical aspects of analysis: “Analysis and Interpretation of the LibQUAL Results”2 and “It’s Not about You! Using LibQUAL at Brown University and at the University of Connecticut Libraries.”3 The latter was informative about the importance of coding comment data. Both presentations gave useful guidance on interpreting charts and understanding zones of tolerance. They also provided instructions for identifying what was actionable in the results: cross-tabulating desired and adequacy mean scores to determine what users rated as most needed but least adequate.
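As a concrete illustration of that cross-tabulation, the sketch below ranks survey items by mean desired score and mean adequacy gap (perceived service level minus minimum acceptable level, following LibQUAL’s definition). It assumes item-level ratings have been exported to a CSV; the file name and column names are hypothetical placeholders, not the libraries’ actual export format.

import pandas as pd

# Hypothetical export: one row per rating, with columns
# item, minimum, desired, perceived (all on LibQUAL's 1-9 scale).
ratings = pd.read_csv("libqual_item_ratings.csv")

# Adequacy gap = perceived minus minimum; negative values mean the
# service falls below the minimum level users will accept.
ratings["adequacy_gap"] = ratings["perceived"] - ratings["minimum"]

# Mean desired score and mean adequacy gap for each survey item.
summary = (
    ratings.groupby("item")[["desired", "adequacy_gap"]]
    .mean()
    .round(2)
)

# Items rated most needed (high desired mean) but least adequate
# (low or negative adequacy gap) surface first in the priority list.
priorities = summary.sort_values(
    by=["adequacy_gap", "desired"], ascending=[True, False]
)
print(priorities.head(10))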
Organizing the open-ended comments was a key part of the analysis process. The LibQUAL email list was tremendously helpful at this stage, as other librarians discussed their strategies for reviewing comments, such as using Brown University’s 2005 Codebook4 as a guide, using an emergent coding strategy,5 and developing a “comments slicer” in Excel.6 The initial work was completed by the UX department’s library associate (a
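For readers who prefer a scriptable alternative to an Excel slicer, a minimal sketch of the same slice-and-count workflow in pandas follows. It assumes each coded comment is stored as one row with a semicolon-delimited list of codes; the file name, column names, and the example code label “website” are illustrative, not the TTU Libraries’ actual coding scheme.

import pandas as pd

# Hypothetical coded-comments export: columns user_group, comment,
# and codes, where codes holds values like "website; hours".
comments = pd.read_csv("libqual_comments_coded.csv")

# Split multi-code cells into one row per code so comments can be
# sliced by any single code.
sliced = (
    comments.assign(code=comments["codes"].str.split(";"))
    .explode("code")
)
sliced["code"] = sliced["code"].str.strip()

# Code frequencies, overall and by user group, mirror the pivot-table
# view a spreadsheet slicer provides.
print(sliced["code"].value_counts())
print(pd.crosstab(sliced["code"], sliced["user_group"]))

# Pull every comment tagged with a given code for closer reading.
website_comments = sliced.loc[sliced["code"] == "website", "comment"]
print(website_comments.head())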