
Latest Publications from the Journal of the Medical Library Association

Development of an open access systematic review instructional video series accessible through the SPI-Hub™ website.
IF 2.9 · CAS Tier 4 (Medicine) · Q1 INFORMATION SCIENCE & LIBRARY SCIENCE · Pub Date: 2025-01-14 · DOI: 10.5195/jmla.2025.2078 · J Med Libr Assoc 113(1):98-100
Sheila V Kusnoor, Annette M Williams, Taneya Y Koonce, Poppy A Krump, Lori A Harding, Jerry Zhao, John D Clark, Nunzia B Giuse

Given the key role of systematic reviews in informing clinical decision making and guidelines, it is important for individuals to have equitable access to quality instructional materials on how to design, conduct, report, and evaluate systematic reviews. In response to this need, Vanderbilt University Medical Center's Center for Knowledge Management (CKM) created an open-access systematic review instructional video series. The educational content was created by experienced CKM information scientists, who worked together to adapt an internal training series that they had developed into a format that could be widely shared with the public. Brief videos, averaging 10 minutes in length, were created addressing essential concepts related to systematic reviews, including distinguishing between literature review types, understanding reasons for conducting a systematic review, designing a systematic review protocol, steps in conducting a systematic review, web-based tools to aid with the systematic review process, publishing a systematic review, and critically evaluating systematic reviews. Quiz questions were developed for each instructional video to allow learners to check their understanding of the material. The systematic review instructional video series launched on CKM's Scholarly Publishing Information Hub (SPI-Hub™) website in Fall 2023. From January through August 2024, there were 1,662 international accesses to the SPI-Hub™ systematic review website, representing 41 countries. Initial feedback, while primarily anecdotal, has been positive. By adapting its internal systematic review training into an online video series format suitable for asynchronous instruction, CKM has been able to widely disseminate its educational materials.

Citations: 0
Making the most of Artificial Intelligence and Large Language Models to support collection development in health sciences libraries.
IF 2.9 · CAS Tier 4 (Medicine) · Q1 INFORMATION SCIENCE & LIBRARY SCIENCE · Pub Date: 2025-01-14 · DOI: 10.5195/jmla.2025.2079 · J Med Libr Assoc 113(1):92-93
Ivan Portillo, David Carson

This project investigated the potential of generative AI models in aiding health sciences librarians with collection development. Researchers at Chapman University's Harry and Diane Rinker Health Science campus evaluated four generative AI models (ChatGPT 4.0, Google Gemini, Perplexity, and Microsoft Copilot) over six months starting in March 2024. Two prompts were used: one to generate recent eBook titles in specific health sciences fields and another to identify subject gaps in the existing collection. The first prompt revealed inconsistencies across models, with Copilot and Perplexity providing sources but also inaccuracies. The second prompt yielded more useful results, with all models offering helpful analysis and accurate Library of Congress call numbers. The findings suggest that Large Language Models (LLMs) are not yet reliable as primary tools for collection development due to inaccuracies and hallucinations. However, they can serve as supplementary tools for analyzing subject coverage and identifying gaps in health sciences collections.
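The gap-analysis prompt lends itself to a small script. Below is a minimal sketch, not the authors' actual workflow, of submitting a holdings list to a general-purpose LLM and asking for under-represented subjects; the model name, holdings list, and prompt wording are illustrative assumptions.

```python
# Minimal sketch of LLM-assisted gap analysis for a collection list.
# Assumes the OpenAI Python client (v1+); model and holdings are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

holdings = [
    "R855.3 .T67 2021  Biomedical informatics",
    "RT81.5 .P65 2019  Nursing research methods",
    "RM301.12 .C55 2020  Clinical pharmacology",
]

prompt = (
    "You are assisting a health sciences librarian with collection "
    "development. Given these Library of Congress call numbers and titles, "
    "list subject areas that appear under-represented for a health sciences "
    "campus, explaining each gap in one sentence:\n\n" + "\n".join(holdings)
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat-capable model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

As the study notes, output from a script like this would still need manual verification, since models may hallucinate titles or subject assessments.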

Citations: 0
JMLA virtual projects continue to show impact of technologies in health sciences libraries.
IF 2.9 · CAS Tier 4 (Medicine) · Q1 INFORMATION SCIENCE & LIBRARY SCIENCE · Pub Date: 2025-01-14 · DOI: 10.5195/jmla.2025.2102 · J Med Libr Assoc 113(1):85
Emily Hurst

Beginning in 2012, the Virtual Projects section of the Journal of the Medical Library Association has provided an opportunity for library leaders and technology experts to share how new technologies are being adopted by health sciences libraries. From educational uses to online tools that enhance library services or access to resources, the Virtual Projects section brings examples of technology use to the forefront. Future Virtual Projects sections will be published in the January issue, and the call for submissions and the Virtual Projects deadline will now fall in June and July.

Citations: 0
Evaluating a large language model's ability to answer clinicians' requests for evidence summaries.
IF 2.9 · CAS Tier 4 (Medicine) · Q1 INFORMATION SCIENCE & LIBRARY SCIENCE · Pub Date: 2025-01-14 · DOI: 10.5195/jmla.2025.1985 · J Med Libr Assoc 113(1):65-77
Mallory N Blasingame, Taneya Y Koonce, Annette M Williams, Dario A Giuse, Jing Su, Poppy A Krump, Nunzia Bettinsoli Giuse

Objective: This study investigated the performance of a generative artificial intelligence (AI) tool using GPT-4 in answering clinical questions in comparison with medical librarians' gold-standard evidence syntheses.

Methods: Questions were extracted from an in-house database of clinical evidence requests previously answered by medical librarians. Questions with multiple parts were subdivided into individual topics. A standardized prompt was developed using the COSTAR framework. Librarians submitted each question into aiChat, an internally managed chat tool using GPT-4, and recorded the responses. The summaries generated by aiChat were evaluated on whether they contained the critical elements used in the librarian's established gold-standard summary. A subset of questions was randomly selected for verification of references provided by aiChat.
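The COSTAR framework structures a prompt into Context, Objective, Style, Tone, Audience, and Response format. The sketch below shows how a standardized prompt of that shape might be assembled; the field wording is hypothetical and is not the study's actual prompt.

```python
# Illustrative COSTAR-style prompt builder; every field value here is
# an assumption, not the study's standardized prompt.
def build_costar_prompt(question: str) -> str:
    return "\n".join([
        "CONTEXT: You answer clinical evidence requests for a medical center.",
        "OBJECTIVE: Summarize the best available evidence for the question below.",
        "STYLE: Concise evidence synthesis citing primary studies.",
        "TONE: Neutral and professional.",
        "AUDIENCE: Practicing clinicians.",
        "RESPONSE: A short summary followed by a numbered reference list.",
        f"QUESTION: {question}",
    ])

print(build_costar_prompt(
    "Does early mobilization reduce ICU length of stay in adults?"
))
```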

Results: Of the 216 evaluated questions, aiChat's response was assessed as "correct" for 180 (83.3%) questions, "partially correct" for 35 (16.2%) questions, and "incorrect" for 1 (0.5%) question. No significant differences were observed in question ratings by question category (p=0.73). For a subset of 30% (n=66) of questions, 162 references were provided in the aiChat summaries, and 60 (37%) were confirmed as nonfabricated.

Conclusions: Overall, the performance of a generative AI tool was promising. However, many included references could not be independently verified, and attempts were not made to assess whether any additional concepts introduced by aiChat were factually accurate. Thus, we envision this being the first of a series of investigations designed to further our understanding of how current and future versions of generative AI can be used and integrated into medical librarians' workflow.

Citations: 0
What's beyond the core? Database coverage in qualitative information retrieval.
IF 2.9 · CAS Tier 4 (Medicine) · Q1 INFORMATION SCIENCE & LIBRARY SCIENCE · Pub Date: 2025-01-14 · DOI: 10.5195/jmla.2025.1591 · J Med Libr Assoc 113(1):49-57
Jennifer Horton, David Kaunelis, Danielle Rabb, Andrea Smith

Objective: This study investigates the effectiveness of bibliographic databases in retrieving qualitative studies for use in systematic and rapid reviews in Health Technology Assessment (HTA) research. Qualitative research is becoming more prevalent in reviews and health technology assessment, but standardized search methodologies, particularly regarding database selection, are still in development.

Methods: To determine how commonly used databases (MEDLINE, CINAHL, PsycINFO, Scopus, and Web of Science) perform, a comprehensive list of relevant journal titles was compiled using InCites Journal Citation Reports and validated by qualitative researchers at Canada's Drug Agency (formerly CADTH). This list was used to evaluate the qualitative holdings of each database by calculating the percentage of total titles held in each database, as well as the number of unique titles per database.
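The holdings calculation reduces to set arithmetic over the validated title list. A minimal sketch follows, with placeholder journal titles and holdings rather than the study's data.

```python
# Coverage and uniqueness per database: percentage of the validated
# qualitative title list held, and titles held by no other database.
# Title sets below are illustrative placeholders.
validated_titles = {"Qualitative Health Research", "BMJ Open",
                    "Social Science & Medicine", "Field Methods"}

holdings = {
    "MEDLINE": {"Qualitative Health Research", "BMJ Open"},
    "Scopus": {"Qualitative Health Research", "BMJ Open",
               "Social Science & Medicine", "Field Methods"},
    "CINAHL": {"Qualitative Health Research"},
}

for db, titles in holdings.items():
    held = titles & validated_titles
    others = set().union(*(t for d, t in holdings.items() if d != db))
    unique = held - others
    pct = 100 * len(held) / len(validated_titles)
    print(f"{db}: {pct:.0f}% of validated titles held, {len(unique)} unique")
```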

Results: While publications on qualitative search methodology generally recommend subject-specific health databases including MEDLINE, CINAHL, and PsycINFO, this study found that multidisciplinary citation indexes Scopus and Web of Science Core Collection not only had the highest percentages of total titles held, but also a higher number of unique titles.

Conclusions: These indexes have potential utility in qualitative search strategies, if only for supplementing other database searches with unique records. This potential was investigated via tests on qualitative rapid review search strategies translated to Scopus to determine how the index may contribute relevant literature.

Citations: 0
Automated tools for systematic review screening methods: an application of machine learning for sexual orientation and gender identity measurement in health research.
IF 2.9 · CAS Tier 4 (Medicine) · Q1 INFORMATION SCIENCE & LIBRARY SCIENCE · Pub Date: 2025-01-14 · DOI: 10.5195/jmla.2025.1860 · J Med Libr Assoc 113(1):31-38
Ashleigh J Rich, Emma L McGorray, Carrie Baldwin-SoRelle, Michelle Cawley, Karen Grigg, Lauren B Beach, Gregory Phillips, Tonia Poteat

Objective: Sexual and gender minority (SGM) populations experience health disparities compared to heterosexual and cisgender populations. The development of accurate, comprehensive sexual orientation and gender identity (SOGI) measures is fundamental to quantifying and addressing SGM disparities, which first requires identifying SOGI-related research. As part of a larger project reviewing and synthesizing how SOGI has been assessed within the health literature, we provide an example of the application of automated tools for systematic reviews to the area of SOGI measurement.

Methods: In collaboration with research librarians, a three-phase approach was used to prioritize screening for a set of 11,441 SOGI measurement studies published since 2012. In Phase 1, search results were stratified into two groups (title with vs. without measurement-related terms); titles with measurement-related terms were manually screened. In Phase 2, supervised clustering using DoCTER software was used to sort the remaining studies based on relevance. In Phase 3, supervised machine learning using DoCTER was used to further identify which studies deemed low relevance in Phase 2 should be prioritized for manual screening.
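Phase 1 is essentially a lexical split of titles. The sketch below illustrates that step with a regular expression; the term list and sample records are assumptions, and DoCTER's supervised clustering and machine learning in Phases 2 and 3 are not reproduced here.

```python
# Phase 1 stratification sketch: route titles containing
# measurement-related terms to manual screening, the rest onward.
import re

MEASUREMENT_TERMS = re.compile(
    r"\b(measurement|measure|scale|item|questionnaire|survey|instrument)\b",
    re.IGNORECASE,
)

titles = [
    "Validation of a two-step gender identity measure in surveys",
    "Health disparities among sexual minority adults: a cohort study",
]

with_terms = [t for t in titles if MEASUREMENT_TERMS.search(t)]
without_terms = [t for t in titles if not MEASUREMENT_TERMS.search(t)]
print(f"{len(with_terms)} titles to manual screening; "
      f"{len(without_terms)} to relevance clustering")
```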

Results: 1,607 studies were identified in Phase 1. Across Phases 2 and 3, the research team excluded 5,056 of the remaining 9,834 studies using DoCTER. In manual review, the percentage of relevant studies in results screened manually was low, ranging from 0.1 to 7.8 percent.

Conclusions: Automated tools used in collaboration with research librarians have the potential to save hundreds of hours of human labor in large-scale systematic reviews of SGM health research.

Citations: 0
Revisiting JMLA case reports: a publication category for driving innovation in health sciences librarianship.
IF 2.9 · CAS Tier 4 (Medicine) · Q1 INFORMATION SCIENCE & LIBRARY SCIENCE · Pub Date: 2025-01-14 · DOI: 10.5195/jmla.2025.2099 · J Med Libr Assoc 113(1):1-3
Jill T Boruff, Michelle Kraft, Alexander J Carroll

In the April 2019 issue (Vol. 106 No. 3), the Journal of the Medical Library Association (JMLA) debuted its Case Report publication category. In the years following this decision, the Case Reports category has grown into an integral component of JMLA. In this editorial, the JMLA Editorial Team highlights the value of case reports and outlines strategies authors can use to draft impactful manuscripts for this category.

Citations: 0
Designing for impact: a case study of UTHSC's research impact challenge.
IF 2.9 · CAS Tier 4 (Medicine) · Q1 INFORMATION SCIENCE & LIBRARY SCIENCE · Pub Date: 2025-01-14 · DOI: 10.5195/jmla.2025.2085 · J Med Libr Assoc 113(1):90-91
Jess Newman McDonald, Annabelle L Holt

Prompted by increasing requests for assistance with research evaluation from faculty researchers and university leadership, faculty librarians at the University of Tennessee Health Science Center (UTHSC) launched an innovative Research Impact Challenge in 2023. This Challenge was inspired by the University of Michigan's model and tailored to the needs of health sciences researchers. This asynchronous event aimed to empower early-career researchers and faculty seeking promotion and tenure by enhancing their online scholarly presence and understanding of how scholarship is tracked and evaluated. A team of diverse experts crafted an engaging learning experience through the strategic use of technology and design. Scribe slideshows and videos offered dynamic instruction, while written content and worksheets facilitated engagement and reflection. The Research Impact Challenge LibGuide, expertly designed with HTML and CSS, served as the central platform, ensuring intuitive navigation and easy access to resources (https://libguides.uthsc.edu/impactchallenge). User interface design prioritized simplicity and accessibility, accommodating diverse learning preferences and technical skills. This innovative project addressed common challenges faced by researchers and demonstrated the impactful use of technology in creating an adaptable and inclusive educational experience. The Research Impact Challenge exemplifies how academic libraries can harness technology to foster scholarly growth and support research impact in the health sciences.

Citations: 0
A scoping review of librarian involvement in competency-based medical education.
IF 2.9 · CAS Tier 4 (Medicine) · Q1 INFORMATION SCIENCE & LIBRARY SCIENCE · Pub Date: 2025-01-14 · DOI: 10.5195/jmla.2025.1965 · J Med Libr Assoc 113(1):9-23
John W Cyrus, Laura Zeigen, Molly Knapp, Amy E Blevins, Brandon Patterson

Objective: A scoping review was undertaken to understand the extent of literature on librarian involvement in competency-based medical education (CBME).

Methods: We followed Joanna Briggs Institute methodology and PRISMA-ScR reporting guidelines. A search of peer-reviewed literature was conducted on December 31, 2022, in Medline, Embase, ERIC, CINAHL Complete, SCOPUS, LISS, LLIS, and LISTA. Studies were included if they described librarian involvement in the planning, delivery, or assessment of CBME in an LCME-accredited medical school and were published in English. Outcomes included characteristics of the interventions (duration, librarian role, content covered) and of the outcomes and measures (level on the Kirkpatrick Model of Training Evaluation, direction of findings, measure used).

Results: Fifty studies were included of 11,051 screened: 46 empirical studies or program evaluations and four literature reviews. Studies were published in eight journals with two-thirds published after 2010. Duration of the intervention ranged from 30 minutes to a semester long. Librarians served as collaborators, leaders, curriculum designers, and evaluators. Studies primarily covered asking clinical questions and finding information and most often assessed reaction or learning outcomes.

Conclusions: A solid base of literature on librarian involvement in CBME exists; however, few studies measure user behavior or use validated outcome measures. When librarians communicate their value to stakeholders, evidence of their contributions is essential. Existing publications may not capture the full extent of work done in this area. Additional research is needed to quantify the impact of librarian involvement in competency-based medical education.

Citations: 0
Filtering failure: the impact of automated indexing in Medline on retrieval of human studies for knowledge synthesis.
IF 2.9 · CAS Tier 4 (Medicine) · Q1 INFORMATION SCIENCE & LIBRARY SCIENCE · Pub Date: 2025-01-14 · DOI: 10.5195/jmla.2025.1972 · J Med Libr Assoc 113(1):58-64
Nicole Askin, Tyler Ostapyk, Carla Epp

Objective: Use of the search filter 'exp animals/not humans.sh' is a well-established method in evidence synthesis to exclude non-human studies. However, the shift to automated indexing of Medline records has raised concerns about the use of subject-heading-based search techniques. We sought to determine how often this string inappropriately excludes human studies among automatically indexed records as compared with manually indexed records in Ovid Medline.

Methods: We searched Ovid Medline for studies published in 2021 and 2022 using the Cochrane Highly Sensitive Search Strategy for randomized trials. We identified all results excluded by the non-human-studies filter. Records were divided into sets based on indexing method: automated, curated, or manual. Each set was screened to identify human studies.
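The filter itself reduces to set logic over a record's MeSH headings: a record is excluded when it carries any heading under the Animals tree but lacks the Humans heading. A minimal sketch under that reading, with illustrative sample records:

```python
# Set-logic sketch of 'exp animals/ not humans.sh'. The animal-tree
# stand-in and the sample records are illustrative assumptions.
ANIMAL_TREE = {"Animals", "Mice", "Rats", "Dogs"}  # stand-in for 'exp Animals/'

records = {
    "pmid1": {"Humans", "Animals"},  # retained: indexed as a human study
    "pmid2": {"Mice"},               # excluded: animal-only indexing
    "pmid3": set(),                  # retained: no subject indexing at all
}

excluded = {pmid for pmid, mesh in records.items()
            if mesh & ANIMAL_TREE and "Humans" not in mesh}
retained = set(records) - excluded
print("excluded:", sorted(excluded), "| retained:", sorted(retained))
```

Under this logic, a human study whose automated indexing assigns an animal heading but omits Humans is silently filtered out, which is the failure mode the study measures.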

Results: Human studies were incorrectly excluded under all three indexing methods, but automated indexing inappropriately excluded human studies at nearly double the rate of manual indexing. Looking specifically at human clinical randomized controlled trials (RCTs), the rate of inappropriate exclusion for automatically indexed records was seven times that of manually indexed records.

Conclusions: Given our findings, searchers are advised to carefully review the effect of the 'exp animals/not humans.sh' search filter on their search results, pending improvements to the automated indexing process.

Citations: 0