Background: Large language models (LLMs) can automatically process clinical free-text documents and extract key information, thereby reducing reading effort and documentation-related workload. High-quality data and targeted model control are essential for practical applicability.
Material and methods: Various approaches to information extraction are presented. In addition, 24 unstructured pathology reports of bone and soft tissue tumors are processed with the locally run, general-purpose LLM Llama 4 Scout using three different prompt variants, which are compared in terms of extraction quality.
Results: Prompt design had a substantial impact on model behavior. Prompts with clear parameter definitions and examples achieved the most reliable results. Typical LLM-specific errors, such as hallucinations and misclassifications, were also observed. LLMs can support clinical staff by rapidly and systematically extracting relevant content from free-text documents. Safe and effective use requires high-quality data, precise inputs, and close collaboration between medical and technical experts.
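The results indicate that prompts combining explicit parameter definitions with a worked example performed best. A minimal sketch of such a prompt template is shown below; the parameter schema, the example report, and the function name are illustrative assumptions, not the prompts actually used in the study:

```python
def build_extraction_prompt(report_text: str) -> str:
    """Assemble a structured extraction prompt for a pathology report.

    Combines (1) explicit parameter definitions, (2) one worked
    example (few-shot), and (3) a strict output instruction.
    The schema below is hypothetical and only illustrates the pattern.
    """
    parameter_definitions = (
        "Extract the following parameters from the pathology report:\n"
        "- diagnosis: the histological tumor diagnosis (string)\n"
        "- dignity: 'benign' or 'malignant'\n"
        "- grading: tumor grade if stated, otherwise 'not reported'\n"
    )
    worked_example = (
        "Example report: 'Resection specimen shows a low-grade "
        "chondrosarcoma, G1.'\n"
        'Example answer: {"diagnosis": "chondrosarcoma", '
        '"dignity": "malignant", "grading": "G1"}\n'
    )
    output_instruction = (
        "Answer only with a JSON object containing exactly these keys. "
        "If a parameter is not mentioned, use 'not reported'. Do not "
        "add information that is absent from the report.\n"
    )
    return (
        f"{parameter_definitions}\n{worked_example}\n"
        f"{output_instruction}\nReport: {report_text}"
    )

prompt = build_extraction_prompt(
    "Well-differentiated liposarcoma of the thigh, G1."
)
```

The resulting string would then be sent to the locally hosted model; constraining the output to a fixed JSON schema makes hallucinations and misclassifications easier to detect downstream.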
