Knowledge Distillation on Extractive Summarization
Ying-Jia Lin, Daniel Tan, Tzu-Hsuan Chou, Hung-Yu Kao, Hsin-Yang Wang
2020 IEEE Third International Conference on Artificial Intelligence and Knowledge Engineering (AIKE), December 2020. DOI: 10.1109/AIKE48582.2020.00019

Large-scale pre-trained frameworks have shown state-of-the-art performance on several natural language processing tasks. However, their high training and inference costs pose significant challenges when deploying such models in real-world applications. In this work, we conduct an empirical study of knowledge distillation on an extractive text summarization task. We first use a pre-trained model as the teacher model for extractive summarization and extract its learned knowledge as soft targets. We then use both the hard targets and the soft targets as the objective for training a much smaller student model to perform extractive summarization. Our results show that the student model scores only about 1 point lower on the three ROUGE metrics for extractive summarization on the CNN/DM dataset, while being 40% smaller than the teacher model and 50% faster at inference.
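To make the training objective described above concrete, here is a minimal sketch of a combined hard-target and soft-target distillation loss for sentence-level extractive summarization. It assumes per-sentence logits from the teacher and the student and binary oracle labels; the temperature, loss weighting, and choice of soft-target loss are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of a distillation objective for extractive summarization.
# Assumptions: per-sentence logits from a teacher and a student model, binary
# "include this sentence" oracle labels (hard targets), and the teacher's
# softened probabilities as soft targets. The temperature and alpha values
# are illustrative; the paper's settings may differ.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      hard_labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Combine a hard-target loss with a soft-target imitation loss.

    student_logits, teacher_logits: (batch, num_sentences) raw scores.
    hard_labels: (batch, num_sentences) 0/1 oracle extraction labels.
    """
    # Hard-target term: binary cross-entropy against the oracle labels.
    hard_loss = F.binary_cross_entropy_with_logits(
        student_logits, hard_labels.float())

    # Soft-target term: match the teacher's temperature-softened sentence
    # probabilities, so the student also learns from the teacher's relative
    # confidences rather than only the 0/1 labels.
    soft_student = torch.sigmoid(student_logits / temperature)
    soft_teacher = torch.sigmoid(teacher_logits / temperature)
    soft_loss = F.mse_loss(soft_student, soft_teacher)

    # Weighted sum; alpha trades off imitating the teacher vs. fitting labels.
    return alpha * soft_loss + (1.0 - alpha) * hard_loss


# Example usage with random tensors (4 documents, 30 sentences each):
student_logits = torch.randn(4, 30, requires_grad=True)
teacher_logits = torch.randn(4, 30)
labels = torch.randint(0, 2, (4, 30))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```

In this sketch the teacher's logits are treated as fixed (no gradient flows through them), which matches the usual distillation setup where the teacher is frozen and only the smaller student is updated.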