ChatGPT (version 4.0, March 14, 2024). OpenAI, San Francisco, CA, USA. https://chat.openai.com; free and subscription plans available. Python (version 3.12.1, October 2, 2024). Python Software Foundation, Beaverton, OR, USA. https://www.python.org; free, open-source. Microsoft Excel (version 365). Microsoft Corporation, Redmond, WA, USA. https://www.microsoft.com/excel; proprietary software, subscription-based.
ChatGPT, Python, and Microsoft Excel. Kaique Sbampato, Humberto Arruda, Édison Renato Silva. Journal of the Medical Library Association. 2025;113(1):110-112. doi:10.5195/jmla.2025.2065.
Health sciences and hospital libraries often face challenges in planning and organizing events due to limited resources and staff. At Stanford School of Medicine's Lane Library, librarians turned to artificial intelligence (AI) tools to address this issue and successfully manage various events, from small workshops to larger, more complex conferences. This article presents a case study on how to effectively integrate generative AI tools into the event planning process, improving efficiency and freeing staff to focus on higher-level tasks.
Leveraging AI tools for streamlined library event planning: a case study from Lane Medical Library. Boglarka Huddleston, Colleen Cuddy. Journal of the Medical Library Association. 2025;113(1):88-89. doi:10.5195/jmla.2025.2087.
Sheila V Kusnoor, Annette M Williams, Taneya Y Koonce, Poppy A Krump, Lori A Harding, Jerry Zhao, John D Clark, Nunzia B Giuse
Given the key role of systematic reviews in informing clinical decision making and guidelines, it is important for individuals to have equitable access to quality instructional materials on how to design, conduct, report, and evaluate systematic reviews. In response to this need, Vanderbilt University Medical Center's Center for Knowledge Management (CKM) created an open-access systematic review instructional video series. The educational content was created by experienced CKM information scientists, who worked together to adapt an internal training series that they had developed into a format that could be widely shared with the public. Brief videos, averaging 10 minutes in length, were created addressing essential concepts related to systematic reviews, including distinguishing between literature review types, understanding reasons for conducting a systematic review, designing a systematic review protocol, steps in conducting a systematic review, web-based tools to aid with the systematic review process, publishing a systematic review, and critically evaluating systematic reviews. Quiz questions were developed for each instructional video to allow learners to check their understanding of the material. The systematic review instructional video series launched on CKM's Scholarly Publishing Information Hub (SPI-Hub™) website in Fall 2023. From January through August 2024, there were 1,662 international accesses to the SPI-Hub™ systematic review website, representing 41 countries. Initial feedback, while primarily anecdotal, has been positive. By adapting its internal systematic review training into an online video series format suitable for asynchronous instruction, CKM has been able to widely disseminate its educational materials.
Development of an open access systematic review instructional video series accessible through the SPI-Hub™ website. Sheila V Kusnoor, Annette M Williams, Taneya Y Koonce, Poppy A Krump, Lori A Harding, Jerry Zhao, John D Clark, Nunzia B Giuse. Journal of the Medical Library Association. 2025;113(1):98-100. doi:10.5195/jmla.2025.2078.
This project investigated the potential of generative AI models in aiding health sciences librarians with collection development. Researchers at Chapman University's Harry and Diane Rinker Health Science campus evaluated four generative AI models (ChatGPT 4.0, Google Gemini, Perplexity, and Microsoft Copilot) over six months starting in March 2024. Two prompts were used: one to generate recent eBook titles in specific health sciences fields and another to identify subject gaps in the existing collection. The first prompt revealed inconsistencies across models, with Copilot and Perplexity providing sources but also inaccuracies. The second prompt yielded more useful results, with all models offering helpful analysis and accurate Library of Congress call numbers. The findings suggest that Large Language Models (LLMs) are not yet reliable as primary tools for collection development due to inaccuracies and hallucinations. However, they can serve as supplementary tools for analyzing subject coverage and identifying gaps in health sciences collections.
Making the most of Artificial Intelligence and Large Language Models to support collection development in health sciences libraries. Ivan Portillo, David Carson. Journal of the Medical Library Association. 2025;113(1):92-93. doi:10.5195/jmla.2025.2079.
Mallory N Blasingame, Taneya Y Koonce, Annette M Williams, Dario A Giuse, Jing Su, Poppy A Krump, Nunzia Bettinsoli Giuse
Objective: This study investigated the performance of a generative artificial intelligence (AI) tool using GPT-4 in answering clinical questions in comparison with medical librarians' gold-standard evidence syntheses.
Methods: Questions were extracted from an in-house database of clinical evidence requests previously answered by medical librarians. Questions with multiple parts were subdivided into individual topics. A standardized prompt was developed using the COSTAR framework. Librarians submitted each question into aiChat, an internally managed chat tool using GPT-4, and recorded the responses. The summaries generated by aiChat were evaluated on whether they contained the critical elements used in the established gold-standard summary of the librarian. A subset of questions was randomly selected for verification of references provided by aiChat.
Results: Of the 216 evaluated questions, aiChat's response was assessed as "correct" for 180 (83.3%) questions, "partially correct" for 35 (16.2%) questions, and "incorrect" for 1 (0.5%) question. No significant differences were observed in question ratings by question category (p=0.73). For a subset of 30% (n=66) of questions, 162 references were provided in the aiChat summaries, and 60 (37%) were confirmed as nonfabricated.
Conclusions: Overall, the performance of a generative AI tool was promising. However, many included references could not be independently verified, and attempts were not made to assess whether any additional concepts introduced by aiChat were factually accurate. Thus, we envision this being the first of a series of investigations designed to further our understanding of how current and future versions of generative AI can be used and integrated into medical librarians' workflow.
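The rating percentages reported above can be reproduced with a short tally. The list below simply re-expands the study's published counts (180/35/1 of 216); it is not the underlying evaluation data.

```python
from collections import Counter

# Re-expanded ratings for the 216 questions, matching the published counts.
ratings = ["correct"] * 180 + ["partially correct"] * 35 + ["incorrect"] * 1

counts = Counter(ratings)
total = len(ratings)
# Percentage of questions receiving each rating, rounded to one decimal.
summary = {label: round(100 * n / total, 1) for label, n in counts.items()}
print(summary)  # {'correct': 83.3, 'partially correct': 16.2, 'incorrect': 0.5}
```

The same tally generalizes to any categorical rating scheme, which makes it easy to re-check published percentages against raw counts.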
Evaluating a large language model's ability to answer clinicians' requests for evidence summaries. Mallory N Blasingame, Taneya Y Koonce, Annette M Williams, Dario A Giuse, Jing Su, Poppy A Krump, Nunzia Bettinsoli Giuse. Journal of the Medical Library Association. 2025;113(1):65-77. doi:10.5195/jmla.2025.1985.
Beginning in 2012, the Virtual Projects section of the Journal of the Medical Library Association has provided an opportunity for library leaders and technology experts to share how new technologies are being adopted by health sciences libraries. From educational applications to online tools that enhance library services or access to resources, the Virtual Projects section brings examples of technology use to the forefront. Future Virtual Projects sections will be published in the January issue, and the call for submissions and submission deadline will now fall in June and July.
JMLA virtual projects continue to show impact of technologies in health sciences libraries. Emily Hurst. Journal of the Medical Library Association. 2025;113(1):85. doi:10.5195/jmla.2025.2102.
Jennifer Horton, David Kaunelis, Danielle Rabb, Andrea Smith
Objective: This study investigates the effectiveness of bibliographic databases for retrieving qualitative studies for use in systematic and rapid reviews in Health Technology Assessment (HTA) research. Qualitative research is becoming more prevalent in reviews and health technology assessment, but standardized search methodologies, particularly regarding database selection, are still in development.
Methods: To determine how commonly used databases (MEDLINE, CINAHL, PsycINFO, Scopus, and Web of Science) perform, a comprehensive list of relevant journal titles was compiled using InCites Journal Citation Reports and validated by qualitative researchers at Canada's Drug Agency (formerly CADTH). This list was used to evaluate the qualitative holdings of each database, by calculating the percentage of total titles held in each database, as well as the number of unique titles per database.
Results: While publications on qualitative search methodology generally recommend subject-specific health databases including MEDLINE, CINAHL, and PsycINFO, this study found that multidisciplinary citation indexes Scopus and Web of Science Core Collection not only had the highest percentages of total titles held, but also a higher number of unique titles.
Conclusions: These indexes have potential utility in qualitative search strategies, if only for supplementing other database searches with unique records. This potential was investigated via tests on qualitative rapid review search strategies translated to Scopus to determine how the index may contribute relevant literature.
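The holdings comparison described in the Methods can be sketched as a set computation: for each database, the share of the validated journal list it holds and the titles no other database in the comparison covers. The journal titles and holdings below are invented for illustration, not the study's data.

```python
# Hypothetical validated list of qualitative-research journal titles.
master_list = {"Qual Health Res", "Int J Qual Methods", "Sociol Health Illn",
               "BMC Health Serv Res", "Health Expect"}

# Hypothetical holdings per database.
holdings = {
    "MEDLINE":        {"Qual Health Res", "BMC Health Serv Res", "Health Expect"},
    "Scopus":         {"Qual Health Res", "Int J Qual Methods",
                       "Sociol Health Illn", "BMC Health Serv Res"},
    "Web of Science": {"Qual Health Res", "Sociol Health Illn", "Health Expect"},
}

for db, titles in holdings.items():
    held = titles & master_list
    # A title is "unique" if no other database in the comparison holds it.
    others = set().union(*(t for d, t in holdings.items() if d != db))
    unique = held - others
    print(f"{db}: {100 * len(held) / len(master_list):.0f}% held, "
          f"{len(unique)} unique title(s)")
```

With real exports of each database's serials list, the same intersection/difference logic yields both metrics the study reports.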
What's beyond the core? Database coverage in qualitative information retrieval. Jennifer Horton, David Kaunelis, Danielle Rabb, Andrea Smith. Journal of the Medical Library Association. 2025;113(1):49-57. doi:10.5195/jmla.2025.1591.
Ashleigh J Rich, Emma L McGorray, Carrie Baldwin-SoRelle, Michelle Cawley, Karen Grigg, Lauren B Beach, Gregory Phillips, Tonia Poteat
Objective: Sexual and gender minority (SGM) populations experience health disparities compared to heterosexual and cisgender populations. The development of accurate, comprehensive sexual orientation and gender identity (SOGI) measures is fundamental to quantify and address SGM disparities, which first requires identifying SOGI-related research. As part of a larger project reviewing and synthesizing how SOGI has been assessed within the health literature, we provide an example of the application of automated tools for systematic reviews to the area of SOGI measurement.
Methods: In collaboration with research librarians, a three-phase approach was used to prioritize screening for a set of 11,441 SOGI measurement studies published since 2012. In Phase 1, search results were stratified into two groups (title with vs. without measurement-related terms); titles with measurement-related terms were manually screened. In Phase 2, supervised clustering using DoCTER software was used to sort the remaining studies based on relevance. In Phase 3, supervised machine learning using DoCTER was used to further identify which studies deemed low relevance in Phase 2 should be prioritized for manual screening.
Results: 1,607 studies were identified in Phase 1. Across Phases 2 and 3, the research team excluded 5,056 of the remaining 9,834 studies using DoCTER. In manual review, the percentage of relevant studies in results screened manually was low, ranging from 0.1 to 7.8 percent.
Conclusions: Automated tools used in collaboration with research librarians have the potential to save hundreds of hours of human labor in large-scale systematic reviews of SGM health research.
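Phase 1's stratification can be sketched as a simple title filter: records whose titles contain measurement-related terms go straight to manual screening, and the rest proceed to the clustering and machine-learning phases. The term list and titles below are hypothetical; the study's actual vocabulary and corpus (11,441 records) are far larger.

```python
import re

# Hypothetical measurement-related vocabulary for the title screen.
measurement_terms = re.compile(
    r"\b(measure|measurement|scale|instrument|item)s?\b", re.IGNORECASE)

titles = [
    "Development of a gender identity measure for population surveys",
    "Health disparities among sexual minority adults",
    "Validating a two-item sexual orientation instrument",
    "Community perspectives on inclusive clinic signage",
]

# Stratify: titles with measurement terms vs. without.
with_terms = [t for t in titles if measurement_terms.search(t)]
without_terms = [t for t in titles if not measurement_terms.search(t)]
# `with_terms` would be manually screened (Phase 1); `without_terms`
# would move on to supervised clustering and ML (Phases 2-3).
```

The value of the stratification is that the cheap lexical pass concentrates likely-relevant records for human review before any model is trained.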
Automated tools for systematic review screening methods: an application of machine learning for sexual orientation and gender identity measurement in health research. Ashleigh J Rich, Emma L McGorray, Carrie Baldwin-SoRelle, Michelle Cawley, Karen Grigg, Lauren B Beach, Gregory Phillips, Tonia Poteat. Journal of the Medical Library Association. 2025;113(1):31-38. doi:10.5195/jmla.2025.1860.
Jill T Boruff, Michelle Kraft, Alexander J Carroll
In the April 2019 issue (Vol. 106 No. 3), the Journal of the Medical Library Association (JMLA) debuted its Case Report publication category. In the years following this decision, the Case Reports category has grown into an integral component of JMLA. In this editorial, the JMLA Editorial Team highlights the value of case reports and outlines strategies authors can use to draft impactful manuscripts for this category.
Revisiting JMLA case reports: a publication category for driving innovation in health sciences librarianship. Jill T Boruff, Michelle Kraft, Alexander J Carroll. Journal of the Medical Library Association. 2025;113(1):1-3. doi:10.5195/jmla.2025.2099.
Prompted by increasing requests for assistance with research evaluation from faculty researchers and university leadership, faculty librarians at the University of Tennessee Health Science Center (UTHSC) launched an innovative Research Impact Challenge in 2023. This Challenge was inspired by the University of Michigan's model and tailored to the needs of health sciences researchers. This asynchronous event aimed to empower early-career researchers and faculty seeking promotion and tenure by enhancing their online scholarly presence and understanding of how scholarship is tracked and evaluated. A team of diverse experts crafted an engaging learning experience through the strategic use of technology and design. Scribe slideshows and videos offered dynamic instruction, while written content and worksheets facilitated engagement and reflection. The Research Impact Challenge LibGuide, expertly designed with HTML and CSS, served as the central platform, ensuring intuitive navigation and easy access to resources (https://libguides.uthsc.edu/impactchallenge). User interface design prioritized simplicity and accessibility, accommodating diverse learning preferences and technical skills. This innovative project addressed common challenges faced by researchers and demonstrated the impactful use of technology in creating an adaptable and inclusive educational experience. The Research Impact Challenge exemplifies how academic libraries can harness technology to foster scholarly growth and support research impact in the health sciences.
Designing for impact: a case study of UTHSC's research impact challenge. Jess Newman McDonald, Annabelle L Holt. Journal of the Medical Library Association. 2025;113(1):90-91. doi:10.5195/jmla.2025.2085.