Background
Named entity recognition (NER) is critical in natural language processing (NLP), particularly in the medical field, where accurate identification of entities, such as patient information and clinical events, is essential. Traditional NER approaches rely heavily on large annotated corpora, which are resource intensive to build. Large language models (LLMs) enable new approaches to NER, particularly through in-context and few-shot learning.
Objective
This study investigates the effects of incorporating annotation guidelines into prompts for NER via LLMs, with a specific focus on their impact on few-shot learning performance across various medical corpora.
Methods
We designed eight prompt patterns that combine few-shot examples with annotation guidelines of varying complexity and evaluated them using three prominent LLMs: GPT-4o, Claude 3.5 Sonnet, and gpt-oss-120b. We used three diverse medical corpora: i2b2-2014, i2b2-2012, and MedTxt-CR. Performance was assessed with precision, recall, and the F1 score, and the evaluation procedures were aligned with those of the corresponding shared tasks to ensure comparability of the results.
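The study's actual prompt patterns are defined in the paper; as an illustration only, the minimal Python sketch below shows one plausible way to combine an annotation-guideline excerpt with few-shot examples into a single NER prompt and to score predictions with entity-level precision, recall, and F1. The guideline text, example sentences, and helper names (build_prompt, precision_recall_f1) are hypothetical and are not taken from the study or its corpora.

```python
from typing import Tuple, Set

# Hypothetical guideline excerpt and few-shot examples (illustrative only;
# the study's actual prompt patterns and corpora differ).
GUIDELINE = (
    "Annotate all AGE, DATE, and DOCTOR mentions. "
    "AGE: tag any stated patient age. DATE: calendar dates, not durations."
)

FEW_SHOT_EXAMPLES = [
    ("The patient, a 92-year-old male, was admitted on 03/14/2012.",
     "[AGE: 92-year-old] [DATE: 03/14/2012]"),
]

def build_prompt(guideline: str, examples, text: str) -> str:
    """Combine a guideline excerpt and few-shot examples into one NER prompt."""
    parts = ["You are a clinical NER system.", "Annotation guideline:", guideline, ""]
    for source, tagged in examples:
        parts += [f"Text: {source}", f"Entities: {tagged}", ""]
    parts += [f"Text: {text}", "Entities:"]
    return "\n".join(parts)

# Entity-level scoring: exact match on (start, end, label) spans.
Span = Tuple[int, int, str]

def precision_recall_f1(pred: Set[Span], gold: Set[Span]):
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

if __name__ == "__main__":
    prompt = build_prompt(GUIDELINE, FEW_SHOT_EXAMPLES,
                          "Seen by Dr. Tanaka on 2013-07-02.")
    print(prompt)
    gold = {(8, 18, "DOCTOR"), (22, 32, "DATE")}   # gold-standard spans
    pred = {(22, 32, "DATE")}                      # model predictions
    print(precision_recall_f1(pred, gold))         # (1.0, 0.5, 0.666...)
```

This sketch scores predictions by exact span matching; the shared-task evaluations referenced above define their own matching criteria.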
Results
Our findings indicate that adding detailed annotation guidelines to few-shot prompts improves recall and the F1 score in most cases.
Conclusion
Including annotation guidelines in prompts enhances the performance of LLMs in NER tasks, making this a practical approach for developing accurate NLP systems in resource-constrained environments. Although annotation guidelines are essential for evaluation and example creation, their integration into LLM prompts can further optimize few-shot learning, especially within specialized domains such as medical NLP.