Aida Marissa Smith, Alexia Estabrook, Mary A Hyde, Michele Matucheski, Eleanor Shanklin Truex
The Ascension Nurse Author Index is an example of how resource-limited clinical libraries can provide value to their organization by creating a database of peer-reviewed journal articles authored by their nursing associates. In 2024, Ascension launched the index to highlight its nurse authors, bring attention to their subject matter expertise, foster collaboration among authors, and recognize their impact within the profession. To minimize expenses, the index uses an open-access platform: software intended for reference management that offers a public-facing cloud option. This unconventional use of the platform let us capitalize on the software's bibliographic database management capabilities while adding institution-specific metadata. Through creative use of an open-access platform, librarians can partner with their organization to create value by highlighting the work of its nurses.
{"title":"Leveraging an open access platform to provide organizational value in clinical environments.","authors":"Aida Marissa Smith, Alexia Estabrook, Mary A Hyde, Michele Matucheski, Eleanor Shanklin Truex","doi":"10.5195/jmla.2025.2086","DOIUrl":"10.5195/jmla.2025.2086","url":null,"abstract":"<p><p>The Ascension Nurse Author Index is an example of how resource-limited clinical libraries can provide value to their organization by creating a database of peer-reviewed journal article publications authored by their nursing associates. In 2024, Ascension launched a database index to highlight its nurse authors, bring attention to subject matter expertise, foster collaboration among authors, and recognize impact within the profession. The index uses an open access platform, software intended for reference management with a public-facing cloud option, to minimize expenses. This unconventional use of the platform allowed us to capitalize on the software's bibliographic database management capabilities while allowing us to input institutional-specific metadata. By creative use of the open-access platform, librarians can successfully partner to create value for their organization by highlighting the work of its nurses.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"94-95"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835042/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ChatGPT (version 4.0, March 14, 2024). OpenAI, San Francisco, CA, USA. https://chat.openai.com; free and subscription plans available. Python (version 3.12.1, October 2, 2024). Python Software Foundation, Beaverton, OR, USA. https://www.python.org; free, open-source. Microsoft Excel (version 365). Microsoft Corporation, Redmond, WA, USA. https://www.microsoft.com/excel; proprietary software, subscription-based.
{"title":"ChatGPT, Python, and Microsoft Excel.","authors":"Kaique Sbampato, Humberto Arruda, Édison Renato Silva","doi":"10.5195/jmla.2025.2065","DOIUrl":"10.5195/jmla.2025.2065","url":null,"abstract":"<p><p><b>ChatGPT (version 4.0, March 14, 2024).</b> OpenAI, San Francisco, CA, USA. https://chat.openai.com; free and subscription plans available. <b>Python (version 3.12.1, October 2, 2024).</b> Python Software Foundation, Beaverton, OR, USA. https://www.python.org; free, open-source. <b>Microsoft Excel (version 365).</b> Microsoft Corporation, Redmond, WA, USA. https://www.microsoft.com/excel; proprietary software, subscription-based.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"110-112"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835046/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143459912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marie Ascher, Margaret A Hoogland, Karen Heskett, Heather N Holmes, Jonathan D Eldredge
Objective: This research project sought to identify those subject areas that leaders and researcher members of the Medical Library Association (MLA) determined to be of greatest importance for research investigation. It updates two previous studies conducted in 2008 and 2011.
Methods: The project involved a three-step Delphi process aimed at collecting the most important and researchable questions facing the health sciences librarianship profession. First, 495 MLA leaders were asked to submit questions, no longer than 50 words each, that could be answered by known research methods; MLA leaders submitted 130 viable, unique questions. Second, the authors asked 200 eligible MLA-member researchers to select the five most important and answerable questions from the list of 130. Third, the same 130 MLA leaders who initially submitted questions were asked to select their top five most important and answerable questions from the 36 top-ranked questions identified by the researchers.
Results: The final 15 questions resulting from the three phases of the study will serve as the next priorities of the MLA Research Agenda. The authors will facilitate organizing teams of volunteers who wish to conduct research studies on these top 15 questions.
Conclusion: The new 2024 MLA Research Agenda will enable the health information professions to allocate scarce resources toward high-yield research studies. The Agenda could be used by journal editors and annual meeting organizers to prioritize submissions for research communications. The Agenda will provide aspiring researchers with some starting points and justification for pursuing research projects on these questions.
{"title":"Making an impact: the new 2024 Medical Library Association research agenda.","authors":"Marie Ascher, Margaret A Hoogland, Karen Heskett, Heather N Holmes, Jonathan D Eldredge","doi":"10.5195/jmla.2025.1955","DOIUrl":"10.5195/jmla.2025.1955","url":null,"abstract":"<p><strong>Objective: </strong>This research project sought to identify those subject areas that leaders and researcher members of the Medical Library Association (MLA) determined to be of greatest importance for research investigation. It updates two previous studies conducted in 2008 and 2011.</p><p><strong>Methods: </strong>The project involved a three-step Delphi process aimed at collecting the most important and researchable questions facing the health sciences librarianship profession. First, 495 MLA leaders were asked to submit questions answerable by known research methods. Submitted questions could not exceed 50 words in length. There were 130 viable, unique questions submitted by MLA leaders. Second, the authors asked 200 eligible MLA-member researchers to select the five (5) most important and answerable questions from the list of 130 questions. Third, the same 130 MLA leaders who initially submitted questions were asked to select their top five (5) most important and answerable questions from the 36 top-ranked questions identified by the researchers.</p><p><strong>Results: </strong>The final 15 questions resulting from the three phases of the study will serve as the next priorities of the MLA Research Agenda. The authors will be facilitating the organization of teams of volunteers wishing to conduct research studies related to these identified top 15 research questions.</p><p><strong>Conclusion: </strong>The new 2024 MLA Research Agenda will enable the health information professions to allocate scarce resources toward high-yield research studies. The Agenda could be used by journal editors and annual meeting organizers to prioritize submissions for research communications. The Agenda will provide aspiring researchers with some starting points and justification for pursuing research projects on these questions.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"24-30"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835027/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital Object Identifiers (DOIs) are a key persistent identifier in the publishing landscape, ensuring the discoverability and citation of research products. Minting DOIs can be a time-consuming task for repository librarians. The process can be automated because the metadata needed for a DOI is already in the repository record, and both DataCite, a DOI minting organization, and Open Repository, a DSpace repository platform, have application programming interfaces (APIs). Existing software enables bulk DOI minting; however, the institutional repository at UMass Chan Medical School contains a mixture of original materials that need DOIs (dissertations, reports, data, etc.) and previously published materials, such as journal articles, that already have them. An institutional repository librarian and her librarian colleague with Python experience undertook a pair programming project to create a script that mints DOIs on demand in DataCite for individual items in the institution's Open Repository instance. The pair met for one hour each week to develop and test the script, combining skills in institutional repositories, metadata, DOI minting, Python coding, APIs, and data cleaning. The project was a valuable learning opportunity for both librarians to improve their Python skills. The new script makes the DOI minting process more efficient, enhances metadata in DataCite, and improves accuracy. Future enhancements, such as automatically updating repository metadata with the new DOI, are planned after the repository upgrade to DSpace 7.
{"title":"Individual DOI minting for Open Repository: a script for creating a DOI on demand for a DSpace repository.","authors":"Tess Grynoch, Lisa A Palmer","doi":"10.5195/jmla.2025.2076","DOIUrl":"https://doi.org/10.5195/jmla.2025.2076","url":null,"abstract":"<p><p>Digital Object Identifiers (DOIs) are a key persistent identifier in the publishing landscape to ensure the discoverability and citation of research products. Minting DOIs can be a time-consuming task for repository librarians. This process can be automated since the metadata for DOIs is already in the repository record and DataCite, a DOI minting organization, and Open Repository, a DSpace repository platform, both have application programming interfaces (APIs). Existing software enables bulk DOI minting. However, the institutional repository at UMass Chan Medical School contains a mixture of original materials that need DOIs (dissertations, reports, data, etc.) and previously published materials that already have DOIs such as journal articles. An institutional repository librarian and her librarian colleague with Python experience embarked on a paired programming project to create a script to mint DOIs on demand in DataCite for individual items in the institution's Open Repository instance. The pair met for one hour each week to develop and test the script using combined skills in institutional repositories, metadata, DOI minting, coding in Python, APIs, and data cleaning. The project was a great learning opportunity for both librarians to improve their Python coding skills. The new script makes the DOI minting process more efficient, enhances metadata in DataCite, and improves accuracy. Future script enhancements such as automatically updating repository metadata with the new DOI are planned after the repository upgrade to DSpace 7.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"86-87"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835045/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Health sciences and hospital libraries often face challenges in planning and organizing events due to limited resources and staff. At Stanford School of Medicine's Lane Library, librarians turned to artificial intelligence (AI) tools to address this issue and successfully manage various events, from small workshops to larger, more complex conferences. This article presents a case study on how to effectively integrate generative AI tools into the event planning process, improving efficiency and freeing staff to focus on higher-level tasks.
{"title":"Leveraging AI tools for streamlined library event planning: a case study from Lane Medical Library.","authors":"Boglarka Huddleston, Colleen Cuddy","doi":"10.5195/jmla.2025.2087","DOIUrl":"https://doi.org/10.5195/jmla.2025.2087","url":null,"abstract":"<p><p>Health sciences and hospital libraries often face challenges in planning and organizing events due to limited resources and staff. At Stanford School of Medicine's Lane Library, librarians turned to artificial intelligence (AI) tools to address this issue and successfully manage various events, from small workshops to larger, more complex conferences. This article presents a case study on how to effectively integrate generative AI tools into the event planning process, improving efficiency and freeing staff to focus on higher-level tasks.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"88-89"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835050/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This project investigated the potential of generative AI models to aid health sciences librarians with collection development. Researchers at Chapman University's Harry and Diane Rinker Health Science campus evaluated four generative AI models (ChatGPT 4.0, Google Gemini, Perplexity, and Microsoft Copilot) over six months beginning in March 2024. Two prompts were used: one to generate recent eBook titles in specific health sciences fields and another to identify subject gaps in the existing collection. The first prompt revealed inconsistencies across models, with Copilot and Perplexity providing sources but also inaccuracies. The second prompt yielded more useful results, with all models offering helpful analysis and accurate Library of Congress call numbers. The findings suggest that large language models (LLMs) are not yet reliable as primary tools for collection development because of inaccuracies and hallucinations. However, they can serve as supplementary tools for analyzing subject coverage and identifying gaps in health sciences collections.
{"title":"Making the most of Artificial Intelligence and Large Language Models to support collection development in health sciences libraries.","authors":"Ivan Portillo, David Carson","doi":"10.5195/jmla.2025.2079","DOIUrl":"10.5195/jmla.2025.2079","url":null,"abstract":"<p><p>This project investigated the potential of generative AI models in aiding health sciences librarians with collection development. Researchers at Chapman University's Harry and Diane Rinker Health Science campus evaluated four generative AI models-ChatGPT 4.0, Google Gemini, Perplexity, and Microsoft Copilot-over six months starting in March 2024. Two prompts were used: one to generate recent eBook titles in specific health sciences fields and another to identify subject gaps in the existing collection. The first prompt revealed inconsistencies across models, with Copilot and Perplexity providing sources but also inaccuracies. The second prompt yielded more useful results, with all models offering helpful analysis and accurate Library of Congress call numbers. The findings suggest that Large Language Models (LLMs) are not yet reliable as primary tools for collection development due to inaccuracies and hallucinations. However, they can serve as supplementary tools for analyzing subject coverage and identifying gaps in health sciences collections.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"92-93"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835035/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sheila V Kusnoor, Annette M Williams, Taneya Y Koonce, Poppy A Krump, Lori A Harding, Jerry Zhao, John D Clark, Nunzia B Giuse
Given the key role of systematic reviews in informing clinical decision making and guidelines, it is important for individuals to have equitable access to quality instructional materials on how to design, conduct, report, and evaluate systematic reviews. In response to this need, Vanderbilt University Medical Center's Center for Knowledge Management (CKM) created an open-access systematic review instructional video series. The educational content was created by experienced CKM information scientists, who worked together to adapt an internal training series that they had developed into a format that could be widely shared with the public. Brief videos, averaging 10 minutes in length, were created addressing essential concepts related to systematic reviews, including distinguishing between literature review types, understanding reasons for conducting a systematic review, designing a systematic review protocol, steps in conducting a systematic review, web-based tools to aid with the systematic review process, publishing a systematic review, and critically evaluating systematic reviews. Quiz questions were developed for each instructional video to allow learners to check their understanding of the material. The systematic review instructional video series launched on CKM's Scholarly Publishing Information Hub (SPI-Hub™) website in Fall 2023. From January through August 2024, there were 1,662 international accesses to the SPI-Hub™ systematic review website, representing 41 countries. Initial feedback, while primarily anecdotal, has been positive. By adapting its internal systematic review training into an online video series format suitable for asynchronous instruction, CKM has been able to widely disseminate its educational materials.
{"title":"Development of an open access systematic review instructional video series accessible through the SPI-Hub™ website.","authors":"Sheila V Kusnoor, Annette M Williams, Taneya Y Koonce, Poppy A Krump, Lori A Harding, Jerry Zhao, John D Clark, Nunzia B Giuse","doi":"10.5195/jmla.2025.2078","DOIUrl":"10.5195/jmla.2025.2078","url":null,"abstract":"<p><p>Given the key role of systematic reviews in informing clinical decision making and guidelines, it is important for individuals to have equitable access to quality instructional materials on how to design, conduct, report, and evaluate systematic reviews. In response to this need, Vanderbilt University Medical Center's Center for Knowledge Management (CKM) created an open-access systematic review instructional video series. The educational content was created by experienced CKM information scientists, who worked together to adapt an internal training series that they had developed into a format that could be widely shared with the public. Brief videos, averaging 10 minutes in length, were created addressing essential concepts related to systematic reviews, including distinguishing between literature review types, understanding reasons for conducting a systematic review, designing a systematic review protocol, steps in conducting a systematic review, web-based tools to aid with the systematic review process, publishing a systematic review, and critically evaluating systematic reviews. Quiz questions were developed for each instructional video to allow learners to check their understanding of the material. The systematic review instructional video series launched on CKM's Scholarly Publishing Information Hub (SPI-Hub™) website in Fall 2023. From January through August 2024, there were 1,662 international accesses to the SPI-Hub™ systematic review website, representing 41 countries. Initial feedback, while primarily anecdotal, has been positive. By adapting its internal systematic review training into an online video series format suitable for asynchronous instruction, CKM has been able to widely disseminate its educational materials.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"98-100"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835029/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mallory N Blasingame, Taneya Y Koonce, Annette M Williams, Dario A Giuse, Jing Su, Poppy A Krump, Nunzia Bettinsoli Giuse
Objective: This study investigated the performance of a generative artificial intelligence (AI) tool using GPT-4 in answering clinical questions in comparison with medical librarians' gold-standard evidence syntheses.
Methods: Questions were extracted from an in-house database of clinical evidence requests previously answered by medical librarians. Questions with multiple parts were subdivided into individual topics. A standardized prompt was developed using the COSTAR framework; a hypothetical sketch of this prompt structure appears below. Librarians submitted each question to aiChat, an internally managed chat tool using GPT-4, and recorded the responses. The aiChat summaries were evaluated on whether they contained the critical elements of the librarians' established gold-standard summaries. A subset of questions was randomly selected to verify the references aiChat provided.
Results: Of the 216 evaluated questions, aiChat's response was assessed as "correct" for 180 (83.3%) questions, "partially correct" for 35 (16.2%) questions, and "incorrect" for 1 (0.5%) question. No significant differences were observed in question ratings by question category (p=0.73). For a subset of 30% (n=66) of questions, 162 references were provided in the aiChat summaries, and 60 (37%) were confirmed as nonfabricated.
Conclusions: Overall, the performance of a generative AI tool was promising. However, many included references could not be independently verified, and attempts were not made to assess whether any additional concepts introduced by aiChat were factually accurate. Thus, we envision this being the first of a series of investigations designed to further our understanding of how current and future versions of generative AI can be used and integrated into medical librarians' workflow.
{"title":"Evaluating a large language model's ability to answer clinicians' requests for evidence summaries.","authors":"Mallory N Blasingame, Taneya Y Koonce, Annette M Williams, Dario A Giuse, Jing Su, Poppy A Krump, Nunzia Bettinsoli Giuse","doi":"10.5195/jmla.2025.1985","DOIUrl":"10.5195/jmla.2025.1985","url":null,"abstract":"<p><strong>Objective: </strong>This study investigated the performance of a generative artificial intelligence (AI) tool using GPT-4 in answering clinical questions in comparison with medical librarians' gold-standard evidence syntheses.</p><p><strong>Methods: </strong>Questions were extracted from an in-house database of clinical evidence requests previously answered by medical librarians. Questions with multiple parts were subdivided into individual topics. A standardized prompt was developed using the COSTAR framework. Librarians submitted each question into aiChat, an internally managed chat tool using GPT-4, and recorded the responses. The summaries generated by aiChat were evaluated on whether they contained the critical elements used in the established gold-standard summary of the librarian. A subset of questions was randomly selected for verification of references provided by aiChat.</p><p><strong>Results: </strong>Of the 216 evaluated questions, aiChat's response was assessed as \"correct\" for 180 (83.3%) questions, \"partially correct\" for 35 (16.2%) questions, and \"incorrect\" for 1 (0.5%) question. No significant differences were observed in question ratings by question category (p=0.73). For a subset of 30% (n=66) of questions, 162 references were provided in the aiChat summaries, and 60 (37%) were confirmed as nonfabricated.</p><p><strong>Conclusions: </strong>Overall, the performance of a generative AI tool was promising. However, many included references could not be independently verified, and attempts were not made to assess whether any additional concepts introduced by aiChat were factually accurate. Thus, we envision this being the first of a series of investigations designed to further our understanding of how current and future versions of generative AI can be used and integrated into medical librarians' workflow.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"65-77"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835037/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beginning in 2012, the Virtual Projects section of the Journal of the Medical Library Association has provided an opportunity for library leaders and technology experts to share how new technologies are being adopted by health sciences libraries. From educational applications to online tools that enhance library services or access to resources, the Virtual Projects section brings examples of technology use to the forefront. Future Virtual Projects sections will be published in the January issue, and the call for submissions and submission deadline will now fall in June and July.
{"title":"<i>JMLA</i> virtual projects continue to show impact of technologies in health sciences libraries.","authors":"Emily Hurst","doi":"10.5195/jmla.2025.2102","DOIUrl":"https://doi.org/10.5195/jmla.2025.2102","url":null,"abstract":"<p><p>Beginning in 2012, the Virtual Projects section of the <i>Journal of the Medical Library Association</i> has provided an opportunity for library leaders and technology experts to share with others how new technologies are being adopted by health sciences libraries. From educational purposes to online tools that enhance library services or access to resources, the Virtual Projects section brings technology use examples to the forefront. The new publication issue for future Virtual Projects sections will be January and the call for submissions and Virtual Projects deadline will now take place in June and July.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"85"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835028/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143459209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jennifer Horton, David Kaunelis, Danielle Rabb, Andrea Smith
Objective: This study investigates the effectiveness of bibliographic databases for retrieving qualitative studies for use in systematic and rapid reviews in Health Technology Assessment (HTA) research. Qualitative research is becoming more prevalent in reviews and health technology assessment, but standardized search methodologies, particularly regarding database selection, are still in development.
Methods: To determine how commonly used databases (MEDLINE, CINAHL, PsycINFO, Scopus, and Web of Science) perform, a comprehensive list of relevant journal titles was compiled using InCites Journal Citation Reports and validated by qualitative researchers at Canada's Drug Agency (formerly CADTH). This list was used to evaluate each database's qualitative holdings by calculating the percentage of the listed titles held in each database as well as the number of titles unique to each database; an illustrative sketch of this calculation appears below.
Results: While publications on qualitative search methodology generally recommend subject-specific health databases such as MEDLINE, CINAHL, and PsycINFO, this study found that the multidisciplinary citation indexes Scopus and Web of Science Core Collection not only held the highest percentages of the listed titles but also contained more unique titles.
Conclusions: These indexes have potential utility in qualitative search strategies, if only to supplement other database searches with unique records. This potential was investigated by translating qualitative rapid review search strategies to Scopus to determine how the index may contribute relevant literature.
{"title":"What's beyond the core? Database coverage in qualitative information retrieval.","authors":"Jennifer Horton, David Kaunelis, Danielle Rabb, Andrea Smith","doi":"10.5195/jmla.2025.1591","DOIUrl":"10.5195/jmla.2025.1591","url":null,"abstract":"<p><strong>Objective: </strong>This study investigates the effectiveness of bibliographic databases to retrieve qualitative studies for use in systematic and rapid reviews in Health Technology Assessment (HTA) research. Qualitative research is becoming more prevalent in reviews and health technology assessment, but standardized search methodologies-particularly regarding database selection-are still in development.</p><p><strong>Methods: </strong>To determine how commonly used databases (MEDLINE, CINAHL, PsycINFO, Scopus, and Web of Science) perform, a comprehensive list of relevant journal titles was compiled using InCites Journal Citation Reports and validated by qualitative researchers at Canada's Drug Agency (formerly CADTH). This list was used to evaluate the qualitative holdings of each database, by calculating the percentage of total titles held in each database, as well as the number of unique titles per database.</p><p><strong>Results: </strong>While publications on qualitative search methodology generally recommend subject-specific health databases including MEDLINE, CINAHL, and PsycINFO, this study found that multidisciplinary citation indexes Scopus and Web of Science Core Collection not only had the highest percentages of total titles held, but also a higher number of unique titles.</p><p><strong>Conclusions: </strong>These indexes have potential utility in qualitative search strategies, if only for supplementing other database searches with unique records. This potential was investigated via tests on qualitative rapid review search strategies translated to Scopus to determine how the index may contribute relevant literature.</p>","PeriodicalId":47690,"journal":{"name":"Journal of the Medical Library Association","volume":"113 1","pages":"49-57"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835044/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}