Post Processing Selection of Automatic Item Generation in Testing to Ensure Human-Like Quality with Machine Learning

Venkata Duvvuri, Gahyoung Lee, Yuwei Hsu, Asha Makwana, C. Morgan

Proceedings of the 2023 7th International Conference on Machine Learning and Soft Computing (published 2023-01-05). DOI: 10.1145/3583788.3583800
Automatic Item Generation (AIG) is increasingly used to process large amounts of information and to meet the growing demand for computerized testing at scale. Recent work on Artificial Intelligence for AIG (also known as Natural Question Generation, NQG) finds that even the newest AIG techniques fall short in syntactic, semantic, and contextual relevance when evaluated qualitatively on small datasets. We confirm this deficiency quantitatively over large datasets. Additionally, we find that human evaluation by Subject Matter Experts (SMEs) conservatively rejects at least ~9% of AI-generated test questions in our experiment across a large and diverse set of dataset topics. We present an analytical study of these differences, which motivates our two-phase post-processing daisy-chain machine learning (ML) architecture for selecting and editing AI-generated questions produced with current techniques. Finally, we identify and propose the first (selection) step in the daisy chain, using ML with over 97% accuracy, and provide analytical guidance for developing the second (editing) step, with a measured lower bound of 2.4% on BLEU score improvement.
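To make the two-phase daisy chain concrete, the following Python sketch illustrates one plausible realization. It is an assumption-laden illustration, not the authors' implementation: the TF-IDF features, logistic-regression classifier, toy data, and BLEU smoothing choice are all invented here; the abstract commits only to an ML selection step with 97+% accuracy and an editing step whose headroom is measured in BLEU improvement.

```python
# Minimal, self-contained sketch of the two-phase post-processing daisy
# chain described in the abstract. Everything below is illustrative: the
# paper does not specify TF-IDF features, logistic regression, or this data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical training data: AI-generated questions with SME verdicts
# (1 = accept, 0 = reject); in the paper's experiment SMEs rejected ~9%.
questions = [
    "What is the primary function of the mitochondria in a cell?",
    "Which process do plants use to convert sunlight into energy?",
    "What mitochondria of which the energy is?",          # garbled: reject
    "Photosynthesis sunlight plants which of following?",  # garbled: reject
    "In what year did the French Revolution begin?",
    "Which data structure uses first-in, first-out ordering?",
]
labels = [1, 1, 0, 0, 1, 1]

# Phase 1 (selection): a classifier trained to emulate SME accept/reject.
selector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression())
selector.fit(questions, labels)

candidates = [
    "Which sorting algorithm has an average-case complexity of O(n log n)?",
    "Sorting which of the the complexity algorithm?",
]
accepted = [q for q, keep in zip(candidates, selector.predict(candidates))
            if keep]
print("passed selection:", accepted)

# Phase 2 (editing): the paper gives analytical guidance only, measuring the
# editing step's potential as BLEU improvement against SME-edited references
# (reported lower bound: 2.4%). A hypothetical before/after BLEU check:
sme_reference = ("Which sorting algorithm has an average-case complexity "
                 "of O(n log n)?").split()
raw_question = "Which sorting algorithm average complexity O(n log n)?".split()
machine_edit = ("Which sorting algorithm has an average complexity "
                "of O(n log n)?").split()

smooth = SmoothingFunction().method1
before = sentence_bleu([sme_reference], raw_question, smoothing_function=smooth)
after = sentence_bleu([sme_reference], machine_edit, smoothing_function=smooth)
print(f"BLEU before editing: {before:.3f}, after: {after:.3f}")
```

In a deployment of this kind, the selector's decision threshold could be tuned toward the conservative rejection behavior the SMEs exhibited, so that questions passed downstream err on the side of human-like quality.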