Maxime Rousseau, Amal Zouaq, Nelly Huynh
medRxiv - Dentistry and Oral Medicine, published 2024-07-08. DOI: 10.1101/2024.07.05.24309412
OQA: A question-answering dataset on orthodontic literature
Background: The near-exponential increase in the number of publications in orthodontics poses a challenge for efficient literature appraisal and evidence-based practice. Language models (LMs), when fine-tuned for question answering, have the potential to assist clinicians and researchers in the critical appraisal of scientific information and thus to improve decision-making.
Methods: This paper introduces OrthodonticQA (OQA), the first question-answering dataset in the field of dentistry to be made publicly available under a permissive license. We propose a framework that uses PICO information and templates for question formulation, and demonstrate its broader applicability across other specialties within dentistry and healthcare. A selection of transformer LMs was trained on OQA to establish performance baselines.
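The PICO-guided templating described above can be illustrated with a minimal sketch. The field names and template wording here are hypothetical, for illustration only, and do not reflect the actual OQA schema or templates:

```python
# Hypothetical sketch of PICO-based question templating.
# Population, Intervention, Comparison, and Outcome are slotted into a
# fixed natural-language template to produce a clinical question.
def pico_question(population: str, intervention: str,
                  comparison: str, outcome: str) -> str:
    return (f"In {population}, what is the effect of {intervention} "
            f"compared with {comparison} on {outcome}?")

q = pico_question("adolescents with Class II malocclusion",
                  "clear aligners", "fixed appliances",
                  "treatment duration")
```

The same template structure transfers directly to other specialties: only the slot fillers change, which is what makes the framework reusable beyond orthodontics.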
Results: The best model achieved a mean F1 score of 77.61 (SD 0.26) and a score of 100/114 (87.72%) on human evaluation. Furthermore, when performance was broken down by grouped subtopics within orthodontics, all LMs showed considerable variation across topics.
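The F1 score reported above is, in extractive question answering, conventionally the token-overlap F1 between the predicted and reference answer spans (as popularized by SQuAD-style evaluation). The abstract does not state the exact metric definition, so the following is a sketch of that standard formulation, not necessarily the paper's implementation:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted and a reference answer,
    in the style of SQuAD-like extractive QA evaluation."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both strings.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# e.g. prediction "the maxillary molar" vs. reference "maxillary molar":
# precision = 2/3, recall = 2/2, F1 = 0.8
score = token_f1("the maxillary molar", "maxillary molar")
```

A mean of 77.61 with SD 0.26 would then correspond to this per-question F1 (scaled to 0-100) averaged over the test set across repeated training runs.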
Conclusion: Our findings highlight the importance of subtopic-level evaluation and the superior performance obtained by pairing a domain-specific model with a domain-specific tokenizer.