{"title":"有限训练数据下的面部图像生成","authors":"Ethan Bevan, Jason Rafe Miller","doi":"10.55632/pwvas.v95i2.973","DOIUrl":null,"url":null,"abstract":"Deep learning models have a wide number of applications including generating realistic-looking images. These models typically require lots of data, but we wanted to explore how much quality is sacrificed by using smaller amounts of data. We built several models and trained them at different dataset sizes, then we assessed the quality of the generated images with the widely used FID measure. As expected, we measured an inverse correlation of -0.7 between image quality and training set size. However, we observed that the small-training-set results had problems not detectable by this experiment. We therefore present an experimental design for a follow-up study that would further explore the lower limits of training set size. These experiments are important for bringing us closer to understanding how much data is needed to train a successful generative model.","PeriodicalId":92280,"journal":{"name":"Proceedings of the West Virginia Academy of Science","volume":"11 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Facial Image Generation with Limited Training Data\",\"authors\":\"Ethan Bevan, Jason Rafe Miller\",\"doi\":\"10.55632/pwvas.v95i2.973\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning models have a wide number of applications including generating realistic-looking images. These models typically require lots of data, but we wanted to explore how much quality is sacrificed by using smaller amounts of data. We built several models and trained them at different dataset sizes, then we assessed the quality of the generated images with the widely used FID measure. As expected, we measured an inverse correlation of -0.7 between image quality and training set size. However, we observed that the small-training-set results had problems not detectable by this experiment. We therefore present an experimental design for a follow-up study that would further explore the lower limits of training set size. These experiments are important for bringing us closer to understanding how much data is needed to train a successful generative model.\",\"PeriodicalId\":92280,\"journal\":{\"name\":\"Proceedings of the West Virginia Academy of Science\",\"volume\":\"11 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the West Virginia Academy of Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.55632/pwvas.v95i2.973\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the West Virginia Academy of Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.55632/pwvas.v95i2.973","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Facial Image Generation with Limited Training Data
Deep learning models have a wide range of applications, including generating realistic-looking images. These models typically require large amounts of training data, and we wanted to explore how much quality is sacrificed by using smaller datasets. We built several models, trained them on datasets of different sizes, and then assessed the quality of the generated images with the widely used Fréchet Inception Distance (FID). As expected, we measured a correlation of -0.7 between FID score and training set size; since lower FID indicates higher quality, image quality improved as the training set grew. However, we observed that the small-training-set results had problems that this experiment could not detect. We therefore present an experimental design for a follow-up study that would further explore the lower limits of training set size. These experiments are important for bringing us closer to understanding how much data is needed to train a successful generative model.
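The abstract does not include the authors' implementation. The following is a minimal sketch of the evaluation procedure it describes: score one trained generator per training-set size with FID, then correlate FID with dataset size. It assumes the torchmetrics package (with its torch-fidelity dependency) for FID; the dataset sizes are illustrative, not the paper's, and random uint8 tensors stand in for real photographs and generator samples.

```python
# Illustrative sketch (not the authors' code) of an FID-vs-dataset-size study.
import torch
import numpy as np
from torchmetrics.image.fid import FrechetInceptionDistance

def compute_fid(real: torch.Tensor, fake: torch.Tensor) -> float:
    """FID between uint8 image batches of shape (N, 3, H, W); lower is better."""
    fid = FrechetInceptionDistance(feature=2048)
    fid.update(real, real=True)   # accumulate real-image Inception features
    fid.update(fake, real=False)  # accumulate generated-image features
    return float(fid.compute())

# Assumed setup: one generator trained at each dataset size. Random tensors
# stand in for real images and each generator's samples in this sketch.
dataset_sizes = [1_000, 5_000, 10_000, 30_000]  # illustrative values
real = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)
fids = [compute_fid(real,
                    torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8))
        for _ in dataset_sizes]

# Pearson correlation between training-set size and FID. A value near -0.7,
# as reported in the abstract, means more data -> lower FID -> better images.
r = np.corrcoef(dataset_sizes, fids)[0, 1]
print(f"correlation(dataset size, FID) = {r:.2f}")
```

In a real run, the stand-in tensors would be replaced by a held-out set of real face images and samples drawn from each trained generator, with far more than 64 images per batch so that the feature covariance estimates underlying FID are stable.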