Integrating Prior Knowledge into Deep Learning
Michelangelo Diligenti, Soumali Roychowdhury, M. Gori
2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 920-923, December 2017. DOI: 10.1109/ICMLA.2017.00-37
Citations: 68
Abstract
Deep learning makes it possible to develop feature representations and train classification models in a fully integrated way. However, learning deep networks is hard, and they improve over shallow architectures only when a large amount of training data is available. Injecting prior knowledge into the learner is a principled way to reduce the amount of training data required, as the learner does not need to induce that knowledge from the data itself. In this paper we propose a general and principled way to integrate prior knowledge when training deep networks. Semantic Based Regularization (SBR) is used as the underlying framework to represent the prior knowledge, expressed as a collection of first-order logic (FOL) clauses, where each task to be learned corresponds to a predicate in the knowledge base. The knowledge base correlates the tasks to be learned and is translated into a set of constraints, which are integrated into the learning process via backpropagation. The experimental results show how integrating the prior knowledge boosts the accuracy of a state-of-the-art deep network on an image classification task.
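The abstract gives no code, so the following PyTorch snippet is a minimal sketch of the core idea: a FOL clause such as "forall x: cat(x) -> animal(x)" is read through a t-norm as a differentiable truth value, and one minus that truth becomes a penalty added to the supervised loss, so constraint gradients flow through backpropagation alongside the label gradients. The toy network, predicate names, product t-norm reading, and constraint weight are all illustrative assumptions, not the authors' implementation.

```python
# Sketch: turning the rule "cat(x) -> animal(x)" into a soft constraint.
# Under the product t-norm, the truth of "a -> b" is 1 - a*(1 - b),
# so the constraint loss 1 - truth = a*(1 - b) is zero iff the rule holds.
import torch
import torch.nn as nn

class MultiLabelNet(nn.Module):
    """Toy network with one sigmoid output per predicate (cat, animal)."""
    def __init__(self, in_dim: int = 32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.cat_head = nn.Linear(64, 1)
        self.animal_head = nn.Linear(64, 1)

    def forward(self, x):
        h = self.body(x)
        return torch.sigmoid(self.cat_head(h)), torch.sigmoid(self.animal_head(h))

def implication_loss(antecedent: torch.Tensor, consequent: torch.Tensor) -> torch.Tensor:
    # Penalize confident antecedents whose consequent is not predicted:
    # loss = mean over the batch of a * (1 - b).
    return (antecedent * (1.0 - consequent)).mean()

model = MultiLabelNet()
bce = nn.BCELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch; labels are consistent with the rule by construction.
x = torch.randn(16, 32)
y_cat = torch.randint(0, 2, (16, 1)).float()
y_animal = torch.maximum(y_cat, torch.randint(0, 2, (16, 1)).float())

opt.zero_grad()
p_cat, p_animal = model(x)
loss = (bce(p_cat, y_cat) + bce(p_animal, y_animal)
        + 0.5 * implication_loss(p_cat, p_animal))  # 0.5 = assumed constraint weight
loss.backward()  # constraint gradients propagate like any other loss term
opt.step()
```

The constraint term also fires on unlabeled examples, which is one way such a penalty can substitute for labeled data; the choice of t-norm and the constraint weight are hyperparameters of this sketch.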