MAGDA: Multi-agent guideline-driven diagnostic assistance
David Bani-Harouni, Nassir Navab, Matthias Keicher
arXiv:2409.06351 (arXiv - CS - Artificial Intelligence), published 2024-09-10
In emergency departments, rural hospitals, or clinics in less developed
regions, clinicians often lack fast image analysis by trained radiologists,
which can have a detrimental effect on patients' healthcare. Large Language
Models (LLMs) have the potential to alleviate some of the pressure on these
clinicians by providing insights that support their decision-making.
While these LLMs achieve high scores on medical exams, showcasing their
strong theoretical medical knowledge, they tend not to follow medical
guidelines. In this work, we introduce a new approach for zero-shot
guideline-driven decision support. We model a system of multiple LLM agents
augmented with a contrastive vision-language model that collaborate to reach a
patient diagnosis. Given simple diagnostic guidelines, the agents
synthesize prompts and screen the image for findings according to these
guidelines. Finally, they provide understandable
chain-of-thought reasoning for their diagnosis, which is then self-refined to
consider inter-dependencies between diseases. As our method is zero-shot, it is
adaptable to settings with rare diseases, where training data is limited, but
expert-crafted disease descriptions are available. We evaluate our method on
two chest X-ray datasets, CheXpert and ChestX-ray 14 Longtail, showcasing
performance improvement over existing zero-shot methods and generalizability to
rare diseases.
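The zero-shot screening step described above pairs the image with guideline-derived text prompts in a shared embedding space. The following is a minimal sketch of that contrastive scoring, not the paper's implementation: the embeddings, finding names, and the present/absent prompt pairing are illustrative assumptions, standing in for prompts the LLM agents would synthesize from the guidelines and embeddings a contrastive vision-language model would produce.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def screen_findings(image_emb: np.ndarray, finding_prompts: dict) -> dict:
    """Score each finding by contrasting a 'finding present' prompt
    embedding against a 'finding absent' prompt embedding, CLIP-style.

    finding_prompts maps a finding name to a (present_emb, absent_emb)
    pair; returns a probability that each finding is present.
    """
    scores = {}
    for name, (present_emb, absent_emb) in finding_prompts.items():
        sims = np.array([
            cosine_sim(image_emb, present_emb),
            cosine_sim(image_emb, absent_emb),
        ])
        # Softmax over the two contrastive similarities turns the raw
        # cosine scores into a "present vs. absent" probability.
        probs = np.exp(sims) / np.exp(sims).sum()
        scores[name] = float(probs[0])
    return scores
```

In a full pipeline, `image_emb` would come from the vision encoder and the two text embeddings from guideline-derived prompts; the agents' chain-of-thought reasoning and self-refinement over disease inter-dependencies then operate on these per-finding scores.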