Sheila V Kusnoor, Annette M Williams, Taneya Y Koonce, Poppy A Krump, Lori A Harding, Jerry Zhao, John D Clark, Nunzia B Giuse
Given the key role of systematic reviews in informing clinical decision making and guidelines, it is important for individuals to have equitable access to quality instructional materials on how to design, conduct, report, and evaluate systematic reviews. In response to this need, Vanderbilt University Medical Center's Center for Knowledge Management (CKM) created an open-access systematic review instructional video series. The educational content was created by experienced CKM information scientists, who worked together to adapt an internal training series that they had developed into a format that could be widely shared with the public. Brief videos, averaging 10 minutes in length, were created addressing essential concepts related to systematic reviews, including distinguishing between literature review types, understanding reasons for conducting a systematic review, designing a systematic review protocol, steps in conducting a systematic review, web-based tools to aid with the systematic review process, publishing a systematic review, and critically evaluating systematic reviews. Quiz questions were developed for each instructional video to allow learners to check their understanding of the material. The systematic review instructional video series launched on CKM's Scholarly Publishing Information Hub (SPI-Hub™) website in Fall 2023. From January through August 2024, there were 1,662 international accesses to the SPI-Hub™ systematic review website, representing 41 countries. Initial feedback, while primarily anecdotal, has been positive. By adapting its internal systematic review training into an online video series format suitable for asynchronous instruction, CKM has been able to widely disseminate its educational materials.
{"title":"Development of an open access systematic review instructional video series accessible through the SPI-Hub™ website.","authors":"Sheila V Kusnoor, Annette M Williams, Taneya Y Koonce, Poppy A Krump, Lori A Harding, Jerry Zhao, John D Clark, Nunzia B Giuse","doi":"10.5195/jmla.2025.2078","DOIUrl":"10.5195/jmla.2025.2078","url":null,"abstract":"<p><p>Given the key role of systematic reviews in informing clinical decision making and guidelines, it is important for individuals to have equitable access to quality instructional materials on how to design, conduct, report, and evaluate systematic reviews. In response to this need, Vanderbilt University Medical Center's Center for Knowledge Management (CKM) created an open-access systematic review instructional video series. The educational content was created by experienced CKM information scientists, who worked together to adapt an internal training series that they had developed into a format that could be widely shared with the public. Brief videos, averaging 10 minutes in length, were created addressing essential concepts related to systematic reviews, including distinguishing between literature review types, understanding reasons for conducting a systematic review, designing a systematic review protocol, steps in conducting a systematic review, web-based tools to aid with the systematic review process, publishing a systematic review, and critically evaluating systematic reviews. Quiz questions were developed for each instructional video to allow learners to check their understanding of the material. The systematic review instructional video series launched on CKM's Scholarly Publishing Information Hub (SPI-Hub™) website in Fall 2023. From January through August 2024, there were 1,662 international accesses to the SPI-Hub™ systematic review website, representing 41 countries. Initial feedback, while primarily anecdotal, has been positive. By adapting its internal systematic review training into an online video series format suitable for asynchronous instruction, CKM has been able to widely disseminate its educational materials.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"98-100"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835029/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This project investigated the potential of generative AI models to aid health sciences librarians with collection development. Researchers at Chapman University's Harry and Diane Rinker Health Science campus evaluated four generative AI models (ChatGPT 4.0, Google Gemini, Perplexity, and Microsoft Copilot) over six months starting in March 2024. Two prompts were used: one to generate recent eBook titles in specific health sciences fields and another to identify subject gaps in the existing collection. The first prompt revealed inconsistencies across models, with Copilot and Perplexity providing sources but also returning inaccuracies. The second prompt yielded more useful results, with all models offering helpful analysis and accurate Library of Congress call numbers. The findings suggest that Large Language Models (LLMs) are not yet reliable as primary tools for collection development due to inaccuracies and hallucinations. However, they can serve as supplementary tools for analyzing subject coverage and identifying gaps in health sciences collections.
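The article does not reproduce the prompts themselves; as a minimal sketch of the two tasks described above, the following hypothetical templates show how a librarian might phrase them for a chat-based model. The wording, function names, and example subjects are assumptions for illustration, not the authors' actual prompts.

```python
# Hypothetical prompt templates for the two collection-development tasks described
# above; the wording is an assumption, not the study's actual prompts.

def ebook_discovery_prompt(subject: str, year: int) -> str:
    """Ask a model for recent eBook titles in a health sciences field."""
    return (
        f"List academic eBooks on {subject} published since {year}. "
        "For each title, give the author, publisher, publication year, "
        "and a source I can use to verify the record."
    )

def gap_analysis_prompt(subject: str, call_number_ranges: list[str]) -> str:
    """Ask a model to flag subject gaps relative to existing LC call number holdings."""
    holdings = ", ".join(call_number_ranges)
    return (
        f"Our library's {subject} collection covers these Library of Congress "
        f"call number ranges: {holdings}. Identify subtopics or emerging areas "
        "that appear underrepresented, with suggested LC call numbers."
    )

if __name__ == "__main__":
    print(ebook_discovery_prompt("physical therapy", 2022))
    print(gap_analysis_prompt("pharmacy", ["RS1-441", "RS125-131.9"]))
```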
{"title":"Making the most of Artificial Intelligence and Large Language Models to support collection development in health sciences libraries.","authors":"Ivan Portillo, David Carson","doi":"10.5195/jmla.2025.2079","DOIUrl":"10.5195/jmla.2025.2079","url":null,"abstract":"<p><p>This project investigated the potential of generative AI models in aiding health sciences librarians with collection development. Researchers at Chapman University's Harry and Diane Rinker Health Science campus evaluated four generative AI models-ChatGPT 4.0, Google Gemini, Perplexity, and Microsoft Copilot-over six months starting in March 2024. Two prompts were used: one to generate recent eBook titles in specific health sciences fields and another to identify subject gaps in the existing collection. The first prompt revealed inconsistencies across models, with Copilot and Perplexity providing sources but also inaccuracies. The second prompt yielded more useful results, with all models offering helpful analysis and accurate Library of Congress call numbers. The findings suggest that Large Language Models (LLMs) are not yet reliable as primary tools for collection development due to inaccuracies and hallucinations. However, they can serve as supplementary tools for analyzing subject coverage and identifying gaps in health sciences collections.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"92-93"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835035/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beginning in 2012, the Virtual Projects section of the Journal of the Medical Library Association has provided an opportunity for library leaders and technology experts to share with others how new technologies are being adopted by health sciences libraries. From educational purposes to online tools that enhance library services or access to resources, the Virtual Projects section brings technology use examples to the forefront. The new publication issue for future Virtual Projects sections will be January and the call for submissions and Virtual Projects deadline will now take place in June and July.
{"title":"<i>JMLA</i> virtual projects continue to show impact of technologies in health sciences libraries.","authors":"Emily Hurst","doi":"10.5195/jmla.2025.2102","DOIUrl":"10.5195/jmla.2025.2102","url":null,"abstract":"<p><p>Beginning in 2012, the Virtual Projects section of the <i>Journal of the Medical Library Association</i> has provided an opportunity for library leaders and technology experts to share with others how new technologies are being adopted by health sciences libraries. From educational purposes to online tools that enhance library services or access to resources, the Virtual Projects section brings technology use examples to the forefront. The new publication issue for future Virtual Projects sections will be January and the call for submissions and Virtual Projects deadline will now take place in June and July.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"85"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835028/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143459209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mallory N Blasingame, Taneya Y Koonce, Annette M Williams, Dario A Giuse, Jing Su, Poppy A Krump, Nunzia Bettinsoli Giuse
Objective: This study investigated the performance of a generative artificial intelligence (AI) tool using GPT-4 in answering clinical questions in comparison with medical librarians' gold-standard evidence syntheses.
Methods: Questions were extracted from an in-house database of clinical evidence requests previously answered by medical librarians. Questions with multiple parts were subdivided into individual topics. A standardized prompt was developed using the COSTAR framework. Librarians submitted each question into aiChat, an internally managed chat tool using GPT-4, and recorded the responses. The summaries generated by aiChat were evaluated on whether they contained the critical elements of the librarians' established gold-standard summaries. A subset of questions was randomly selected for verification of references provided by aiChat.
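For readers unfamiliar with the COSTAR structure (Context, Objective, Style, Tone, Audience, Response format), the sketch below shows how a standardized prompt of this kind might be assembled. The field contents and class name are illustrative assumptions, not the study's actual prompt.

```python
# Minimal sketch of a COSTAR-structured prompt (Context, Objective, Style, Tone,
# Audience, Response format). Field contents are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class CostarPrompt:
    context: str
    objective: str
    style: str
    tone: str
    audience: str
    response: str

    def render(self, question: str) -> str:
        """Assemble the six COSTAR sections plus the clinical question into one prompt."""
        return "\n".join([
            f"CONTEXT: {self.context}",
            f"OBJECTIVE: {self.objective} Question: {question}",
            f"STYLE: {self.style}",
            f"TONE: {self.tone}",
            f"AUDIENCE: {self.audience}",
            f"RESPONSE: {self.response}",
        ])

prompt = CostarPrompt(
    context="You are assisting a medical librarian who answers clinicians' evidence requests.",
    objective="Summarize the best available evidence for the clinical question below.",
    style="Concise evidence synthesis grounded in published literature.",
    tone="Neutral and professional.",
    audience="Practicing clinicians.",
    response="A short summary followed by a numbered list of references.",
)

print(prompt.render("Does early mobilization reduce ICU length of stay?"))
```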
Results: Of the 216 evaluated questions, aiChat's response was assessed as "correct" for 180 (83.3%) questions, "partially correct" for 35 (16.2%) questions, and "incorrect" for 1 (0.5%) question. No significant differences were observed in question ratings by question category (p=0.73). For a subset of 30% (n=66) of questions, 162 references were provided in the aiChat summaries, and 60 (37%) were confirmed as nonfabricated.
Conclusions: Overall, the performance of a generative AI tool was promising. However, many included references could not be independently verified, and attempts were not made to assess whether any additional concepts introduced by aiChat were factually accurate. Thus, we envision this being the first of a series of investigations designed to further our understanding of how current and future versions of generative AI can be used and integrated into medical librarians' workflow.
{"title":"Evaluating a large language model's ability to answer clinicians' requests for evidence summaries.","authors":"Mallory N Blasingame, Taneya Y Koonce, Annette M Williams, Dario A Giuse, Jing Su, Poppy A Krump, Nunzia Bettinsoli Giuse","doi":"10.5195/jmla.2025.1985","DOIUrl":"10.5195/jmla.2025.1985","url":null,"abstract":"<p><strong>Objective: </strong>This study investigated the performance of a generative artificial intelligence (AI) tool using GPT-4 in answering clinical questions in comparison with medical librarians' gold-standard evidence syntheses.</p><p><strong>Methods: </strong>Questions were extracted from an in-house database of clinical evidence requests previously answered by medical librarians. Questions with multiple parts were subdivided into individual topics. A standardized prompt was developed using the COSTAR framework. Librarians submitted each question into aiChat, an internally managed chat tool using GPT-4, and recorded the responses. The summaries generated by aiChat were evaluated on whether they contained the critical elements used in the established gold-standard summary of the librarian. A subset of questions was randomly selected for verification of references provided by aiChat.</p><p><strong>Results: </strong>Of the 216 evaluated questions, aiChat's response was assessed as \"correct\" for 180 (83.3%) questions, \"partially correct\" for 35 (16.2%) questions, and \"incorrect\" for 1 (0.5%) question. No significant differences were observed in question ratings by question category (p=0.73). For a subset of 30% (n=66) of questions, 162 references were provided in the aiChat summaries, and 60 (37%) were confirmed as nonfabricated.</p><p><strong>Conclusions: </strong>Overall, the performance of a generative AI tool was promising. However, many included references could not be independently verified, and attempts were not made to assess whether any additional concepts introduced by aiChat were factually accurate. Thus, we envision this being the first of a series of investigations designed to further our understanding of how current and future versions of generative AI can be used and integrated into medical librarians' workflow.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"65-77"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835037/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jennifer Horton, David Kaunelis, Danielle Rabb, Andrea Smith
Objective: This study investigates the effectiveness of bibliographic databases in retrieving qualitative studies for use in systematic and rapid reviews in Health Technology Assessment (HTA) research. Qualitative research is becoming more prevalent in reviews and health technology assessment, but standardized search methodologies, particularly regarding database selection, are still in development.
Methods: To determine how commonly used databases (MEDLINE, CINAHL, PsycINFO, Scopus, and Web of Science) perform, a comprehensive list of relevant journal titles was compiled using InCites Journal Citation Reports and validated by qualitative researchers at Canada's Drug Agency (formerly CADTH). This list was used to evaluate each database's qualitative holdings by calculating the percentage of the listed titles held in each database, as well as the number of titles unique to each database.
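As a minimal sketch of that coverage calculation, the snippet below computes, for each database, the share of a validated journal list it holds and the titles it alone holds. The journal names and holdings are placeholder assumptions, not the study's data.

```python
# Coverage sketch: percentage of the validated qualitative journal list held by
# each database, plus titles held by no other database. Data are placeholders.

journal_list = {
    "Qualitative Health Research",
    "Global Qualitative Nursing Research",
    "International Journal of Qualitative Methods",
}

holdings = {
    "MEDLINE": {"Qualitative Health Research"},
    "CINAHL":  {"Qualitative Health Research", "Global Qualitative Nursing Research"},
    "Scopus":  {"Qualitative Health Research", "Global Qualitative Nursing Research",
                "International Journal of Qualitative Methods"},
}

for db, titles in holdings.items():
    held = titles & journal_list                                  # titles from the list this database indexes
    others = set().union(*(t for d, t in holdings.items() if d != db))
    unique = held - others                                        # titles no other database holds
    pct = 100 * len(held) / len(journal_list)
    print(f"{db}: {pct:.0f}% of list held, {len(unique)} unique title(s)")
```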
Results: While publications on qualitative search methodology generally recommend subject-specific health databases including MEDLINE, CINAHL, and PsycINFO, this study found that the multidisciplinary citation indexes Scopus and Web of Science Core Collection not only held the highest percentages of the listed titles but also a higher number of unique titles.
Conclusions: These indexes have potential utility in qualitative search strategies, if only for supplementing other database searches with unique records. This potential was investigated via tests on qualitative rapid review search strategies translated to Scopus to determine how the index may contribute relevant literature.
{"title":"What's beyond the core? Database coverage in qualitative information retrieval.","authors":"Jennifer Horton, David Kaunelis, Danielle Rabb, Andrea Smith","doi":"10.5195/jmla.2025.1591","DOIUrl":"10.5195/jmla.2025.1591","url":null,"abstract":"<p><strong>Objective: </strong>This study investigates the effectiveness of bibliographic databases to retrieve qualitative studies for use in systematic and rapid reviews in Health Technology Assessment (HTA) research. Qualitative research is becoming more prevalent in reviews and health technology assessment, but standardized search methodologies-particularly regarding database selection-are still in development.</p><p><strong>Methods: </strong>To determine how commonly used databases (MEDLINE, CINAHL, PsycINFO, Scopus, and Web of Science) perform, a comprehensive list of relevant journal titles was compiled using InCites Journal Citation Reports and validated by qualitative researchers at Canada's Drug Agency (formerly CADTH). This list was used to evaluate the qualitative holdings of each database, by calculating the percentage of total titles held in each database, as well as the number of unique titles per database.</p><p><strong>Results: </strong>While publications on qualitative search methodology generally recommend subject-specific health databases including MEDLINE, CINAHL, and PsycINFO, this study found that multidisciplinary citation indexes Scopus and Web of Science Core Collection not only had the highest percentages of total titles held, but also a higher number of unique titles.</p><p><strong>Conclusions: </strong>These indexes have potential utility in qualitative search strategies, if only for supplementing other database searches with unique records. This potential was investigated via tests on qualitative rapid review search strategies translated to Scopus to determine how the index may contribute relevant literature.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"49-57"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835044/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ashleigh J Rich, Emma L McGorray, Carrie Baldwin-SoRelle, Michelle Cawley, Karen Grigg, Lauren B Beach, Gregory Phillips, Tonia Poteat
Objective: Sexual and gender minority (SGM) populations experience health disparities compared to heterosexual and cisgender populations. The development of accurate, comprehensive sexual orientation and gender identity (SOGI) measures is fundamental to quantify and address SGM disparities, which first requires identifying SOGI-related research. As part of a larger project reviewing and synthesizing how SOGI has been assessed within the health literature, we provide an example of the application of automated tools for systematic reviews to the area of SOGI measurement.
Methods: In collaboration with research librarians, a three-phase approach was used to prioritize screening for a set of 11,441 SOGI measurement studies published since 2012. In Phase 1, search results were stratified into two groups (title with vs. without measurement-related terms); titles with measurement-related terms were manually screened. In Phase 2, supervised clustering using DoCTER software was used to sort the remaining studies based on relevance. In Phase 3, supervised machine learning using DoCTER was used to further identify which studies deemed low relevance in Phase 2 should be prioritized for manual screening.
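The Phase 1 stratification step can be pictured as a simple title filter: records whose titles contain measurement-related terms go to manual screening, and the rest pass to the automated phases. The sketch below illustrates this split; the term list and sample records are illustrative assumptions, not the project's actual criteria.

```python
# Rough sketch of the Phase 1 stratification: split retrieved records by whether
# their titles contain measurement-related terms. Term list and records are made up.

import re

MEASUREMENT_TERMS = re.compile(
    r"\b(measure|measurement|scale|instrument|questionnaire|validity|reliability)\b",
    re.IGNORECASE,
)

records = [
    {"id": 1, "title": "Validity of a two-step gender identity measure in population surveys"},
    {"id": 2, "title": "Health disparities among sexual minority adults"},
]

with_terms = [r for r in records if MEASUREMENT_TERMS.search(r["title"])]
without_terms = [r for r in records if not MEASUREMENT_TERMS.search(r["title"])]

print(f"Manual screening queue: {len(with_terms)} record(s)")
print(f"Deferred to automated phases: {len(without_terms)} record(s)")
```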
Results: 1,607 studies were identified in Phase 1. Across Phases 2 and 3, the research team excluded 5,056 of the remaining 9,834 studies using DoCTER. In manual review, the percentage of relevant studies among the manually screened results was low, ranging from 0.1 to 7.8 percent.
Conclusions: Automated tools used in collaboration with research librarians have the potential to save hundreds of hours of human labor in large-scale systematic reviews of SGM health research.
{"title":"Automated tools for systematic review screening methods: an application of machine learning for sexual orientation and gender identity measurement in health research.","authors":"Ashleigh J Rich, Emma L McGorray, Carrie Baldwin-SoRelle, Michelle Cawley, Karen Grigg, Lauren B Beach, Gregory Phillips, Tonia Poteat","doi":"10.5195/jmla.2025.1860","DOIUrl":"10.5195/jmla.2025.1860","url":null,"abstract":"<p><strong>Objective: </strong>Sexual and gender minority (SGM) populations experience health disparities compared to heterosexual and cisgender populations. The development of accurate, comprehensive sexual orientation and gender identity (SOGI) measures is fundamental to quantify and address SGM disparities, which first requires identifying SOGI-related research. As part of a larger project reviewing and synthesizing how SOGI has been assessed within the health literature, we provide an example of the application of automated tools for systematic reviews to the area of SOGI measurement.</p><p><strong>Methods: </strong>In collaboration with research librarians, a three-phase approach was used to prioritize screening for a set of 11,441 SOGI measurement studies published since 2012. In Phase 1, search results were stratified into two groups (title with vs. without measurement-related terms); titles with measurement-related terms were manually screened. In Phase 2, supervised clustering using DoCTER software was used to sort the remaining studies based on relevance. In Phase 3, supervised machine learning using DoCTER was used to further identify which studies deemed low relevance in Phase 2 should be prioritized for manual screening.</p><p><strong>Results: </strong>1,607 studies were identified in Phase 1. Across Phases 2 and 3, the research team excluded 5,056 of the remaining 9,834 studies using DoCTER. In manual review, the percentage of relevant studies in results screened manually was low, ranging from 0.1 to 7.8 percent.</p><p><strong>Conclusions: </strong>Automated tools used in collaboration with research librarians have the potential to save hundreds of hours of human labor in large-scale systematic reviews of SGM health research.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"31-38"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835039/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143459852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jill T Boruff, Michelle Kraft, Alexander J Carroll
In the April 2019 issue (Vol. 106 No. 3), the Journal of the Medical Library Association (JMLA) debuted its Case Report publication category. In the years following this decision, the Case Reports category has grown into an integral component of JMLA. In this editorial, the JMLA Editorial Team highlights the value of case reports and outlines strategies authors can use to draft impactful manuscripts for this category.
{"title":"Revisiting <i>JMLA</i> case reports: a publication category for driving innovation in health sciences librarianship.","authors":"Jill T Boruff, Michelle Kraft, Alexander J Carroll","doi":"10.5195/jmla.2025.2099","DOIUrl":"10.5195/jmla.2025.2099","url":null,"abstract":"<p><p>In the April 2019 issue (Vol. 106 No. 3), the <i>Journal of the Medical Library Association (JMLA)</i> debuted its Case Report publication category. In the years following this decision, the Case Reports category has grown into an integral component of <i>JMLA</i>. In this editorial, the <i>JMLA</i> Editorial Team highlights the value of case reports and outlines strategies authors can use to draft impactful manuscripts for this category.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"1-3"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835043/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prompted by increasing requests for assistance with research evaluation from faculty researchers and university leadership, faculty librarians at the University of Tennessee Health Science Center (UTHSC) launched an innovative Research Impact Challenge in 2023. This Challenge was inspired by the University of Michigan's model and tailored to the needs of health sciences researchers. This asynchronous event aimed to empower early-career researchers and faculty seeking promotion and tenure by enhancing their online scholarly presence and understanding of how scholarship is tracked and evaluated. A team of diverse experts crafted an engaging learning experience through the strategic use of technology and design. Scribe slideshows and videos offered dynamic instruction, while written content and worksheets facilitated engagement and reflection. The Research Impact Challenge LibGuide, expertly designed with HTML and CSS, served as the central platform, ensuring intuitive navigation and easy access to resources (https://libguides.uthsc.edu/impactchallenge). User interface design prioritized simplicity and accessibility, accommodating diverse learning preferences and technical skills. This innovative project addressed common challenges faced by researchers and demonstrated the impactful use of technology in creating an adaptable and inclusive educational experience. The Research Impact Challenge exemplifies how academic libraries can harness technology to foster scholarly growth and support research impact in the health sciences.
{"title":"Designing for impact: a case study of UTHSC's research impact challenge.","authors":"Jess Newman McDonald, Annabelle L Holt","doi":"10.5195/jmla.2025.2085","DOIUrl":"10.5195/jmla.2025.2085","url":null,"abstract":"<p><p>Prompted by increasing requests for assistance with research evaluation from faculty researchers and university leadership, faculty librarians at the University of Tennessee Health Science Center (UTHSC) launched an innovative Research Impact Challenge in 2023. This Challenge was inspired by the University of Michigan's model and tailored to the needs of health sciences researchers. This asynchronous event aimed to empower early-career researchers and faculty seeking promotion and tenure by enhancing their online scholarly presence and understanding of how scholarship is tracked and evaluated. A team of diverse experts crafted an engaging learning experience through the strategic use of technology and design. Scribe slideshows and videos offered dynamic instruction, while written content and worksheets facilitated engagement and reflection. The Research Impact Challenge LibGuide, expertly designed with HTML and CSS, served as the central platform, ensuring intuitive navigation and easy access to resources (https://libguides.uthsc.edu/impactchallenge). User interface design prioritized simplicity and accessibility, accommodating diverse learning preferences and technical skills. This innovative project addressed common challenges faced by researchers and demonstrated the impactful use of technology in creating an adaptable and inclusive educational experience. The Research Impact Challenge exemplifies how academic libraries can harness technology to foster scholarly growth and support research impact in the health sciences.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"90-91"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835040/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143459992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
John W Cyrus, Laura Zeigen, Molly Knapp, Amy E Blevins, Brandon Patterson
Objective: A scoping review was undertaken to understand the extent of literature on librarian involvement in competency-based medical education (CBME).
Methods: We followed Joanna Briggs Institute methodology and PRISMA-ScR reporting guidelines. A search of peer-reviewed literature was conducted on December 31, 2022, in Medline, Embase, ERIC, CINAHL Complete, SCOPUS, LISS, LLIS, and LISTA. Studies were included if they described librarian involvement in the planning, delivery, or assessment of CBME in an LCME-accredited medical school and were published in English. Outcomes included characteristics of the interventions (duration, librarian role, content covered) and of the outcomes and measures (level on the Kirkpatrick Model of Training Evaluation, direction of findings, measure used).
Results: Fifty studies were included of 11,051 screened: 46 empirical studies or program evaluations and four literature reviews. Studies were published in eight journals with two-thirds published after 2010. Duration of the intervention ranged from 30 minutes to a semester long. Librarians served as collaborators, leaders, curriculum designers, and evaluators. Studies primarily covered asking clinical questions and finding information and most often assessed reaction or learning outcomes.
Conclusions: A solid base of literature on librarian involvement in CBME exists; however, few studies measure user behavior or use validated outcome measures. When librarians are communicating their value to stakeholders, having evidence for the contributions of librarians is essential. Existing publications may not capture the extent of work done in this area. Additional research is needed to quantify the impact of librarian involvement in competency-based medical education.
{"title":"A scoping review of librarian involvement in competency-based medical education.","authors":"John W Cyrus, Laura Zeigen, Molly Knapp, Amy E Blevins, Brandon Patterson","doi":"10.5195/jmla.2025.1965","DOIUrl":"10.5195/jmla.2025.1965","url":null,"abstract":"<p><strong>Objective: </strong>A scoping review was undertaken to understand the extent of literature on librarian involvement in competency-based medical education (CBME).</p><p><strong>Methods: </strong>We followed Joanna Briggs Institute methodology and PRISMA-ScR reporting guidelines. A search of peer-reviewed literature was conducted on December 31, 2022, in Medline, Embase, ERIC, CINAHL Complete, SCOPUS, LISS, LLIS, and LISTA. Studies were included if they described librarian involvement in the planning, delivery, or assessment of CBME in an LCME-accredited medical school and were published in English. Outcomes included characteristics of the inventions (duration, librarian role, content covered) and of the outcomes and measures (level on Kirkpatrick Model of Training Evaluation, direction of findings, measure used).</p><p><strong>Results: </strong>Fifty studies were included of 11,051 screened: 46 empirical studies or program evaluations and four literature reviews. Studies were published in eight journals with two-thirds published after 2010. Duration of the intervention ranged from 30 minutes to a semester long. Librarians served as collaborators, leaders, curriculum designers, and evaluators. Studies primarily covered asking clinical questions and finding information and most often assessed reaction or learning outcomes.</p><p><strong>Conclusions: </strong>A solid base of literature on librarian involvement in CBME exists; however, few studies measure user behavior or use validated outcomes measures. When librarians are communicating their value to stakeholders, having evidence for the contributions of librarians is essential. Existing publications may not capture the extent of work done in this area. Additional research is needed to quantify the impact of librarian involvement in competency-based medical education.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"9-23"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835034/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143459894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Objective: Use of the search filter 'exp animals/not humans.sh' is a well-established method in evidence synthesis to exclude non-human studies. However, the shift to automated indexing of Medline records has raised concerns about the use of subject-heading-based search techniques. We sought to determine how often this string inappropriately excludes human studies among automatically indexed records as compared to manually indexed records in Ovid Medline.
Methods: We searched Ovid Medline for studies published in 2021 and 2022 using the Cochrane Highly Sensitive Search Strategy for randomized trials. We identified all results excluded by the non-human-studies filter. Records were divided into sets based on indexing method: automated, curated, or manual. Each set was screened to identify human studies.
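The exclusion logic being tested reduces to a Boolean check on a record's subject headings: a record is removed when it carries any heading from the exploded Animals tree but lacks the Humans heading, which is how a human study missing the Humans heading can be dropped. The sketch below illustrates that logic; the heading set and example records are assumptions for demonstration, not MEDLINE data.

```python
# Sketch of the Boolean logic behind the 'exp animals/ not humans.sh' hedge:
# a record is excluded when it has any heading from the exploded Animals tree
# and does not have the Humans heading. Headings below are illustrative only.

ANIMALS_TREE = {"Animals", "Mice", "Rats", "Dogs"}  # stand-in for the exploded MeSH tree

def excluded_by_filter(mesh_headings: set[str]) -> bool:
    """True if the record would be removed by the non-human-studies filter."""
    return bool(mesh_headings & ANIMALS_TREE) and "Humans" not in mesh_headings

# A human study tagged with Animals but not Humans is dropped --
# the failure mode this study measures under automated indexing.
print(excluded_by_filter({"Animals"}))            # True  -> excluded
print(excluded_by_filter({"Animals", "Humans"}))  # False -> retained
```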
Results: Human studies were incorrectly excluded under all three indexing conditions, but automated indexing inappropriately excluded human studies at nearly double the rate of manual indexing. Looking specifically at human clinical randomized controlled trials (RCTs), the rate of inappropriate exclusion among automatically indexed records was seven times that of manually indexed records.
Conclusions: Given our findings, searchers are advised to carefully review the effect of the 'exp animals/not humans.sh' search filter on their search results, pending improvements to the automated indexing process.
{"title":"Filtering failure: the impact of automated indexing in Medline on retrieval of human studies for knowledge synthesis.","authors":"Nicole Askin, Tyler Ostapyk, Carla Epp","doi":"10.5195/jmla.2025.1972","DOIUrl":"10.5195/jmla.2025.1972","url":null,"abstract":"<p><strong>Objective: </strong>Use of the search filter 'exp animals/not humans.sh' is a well-established method in evidence synthesis to exclude non-human studies. However, the shift to automated indexing of Medline records has raised concerns about the use of subject-heading-based search techniques. We sought to determine how often this string inappropriately excludes human studies among automated as compared to manually indexed records in Ovid Medline.</p><p><strong>Methods: </strong>We searched Ovid Medline for studies published in 2021 and 2022 using the Cochrane Highly Sensitive Search Strategy for randomized trials. We identified all results excluded by the non-human-studies filter. Records were divided into sets based on indexing method: automated, curated, or manual. Each set was screened to identify human studies.</p><p><strong>Results: </strong>Human studies were incorrectly excluded in all three conditions, but automated indexing inappropriately excluded human studies at nearly double the rate as manual indexing. In looking specifically at human clinical randomized controlled trials (RCTs), the rate of inappropriate exclusion of automated-indexing records was seven times that of manually-indexed records.</p><p><strong>Conclusions: </strong>Given our findings, searchers are advised to carefully review the effect of the 'exp animals/not humans.sh' search filter on their search results, pending improvements to the automated indexing process.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"58-64"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835038/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}