Hyoungseob Park, Minki Jeong, Youngeun Kim, Changick Kim
2020 IEEE International Conference on Image Processing (ICIP), October 2020. DOI: 10.1109/ICIP40778.2020.9191054
Self-Training Of Graph Neural Networks Using Similarity Reference For Robust Training With Noisy Labels
Filtering noisy labels is crucial for robust training of deep neural networks. To train networks with noisy labels, sampling methods have been introduced: they select instances deemed reliable and update the network using only those samples. Because they rarely employ the non-sampled data for training, these methods have a fundamental limitation in that they reduce the effective amount of training data. To alleviate this problem, our approach aims to fully utilize the whole dataset by leveraging the information carried by the sampled data. To this end, we propose a novel graph-based learning framework that enables the network to propagate the label information of the sampled data to adjacent data, whether sampled or not. We also propose a novel self-training strategy that exploits the non-sampled data without labels and regularizes the network update using the information of the sampled data. Our method outperforms state-of-the-art sampling methods.
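The central idea in the abstract — propagating label information from sampled (trusted) instances to similar, non-sampled neighbors — can be illustrated with a minimal sketch. This is not the paper's actual method (which uses a graph neural network and a self-training loss); it is a generic similarity-weighted label-propagation step, and every name and data value below is hypothetical:

```python
import numpy as np

def propagate_labels(features, labels, sampled_mask, num_classes):
    """One step of similarity-weighted label propagation.

    Sampled (trusted) instances keep their labels; every other instance
    receives a pseudo-label from the similarity-weighted vote of the
    sampled instances' labels.
    """
    # Cosine similarity between all pairs of feature vectors.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T

    # One-hot label matrix, nonzero only for sampled instances,
    # so only trusted labels contribute to the vote.
    onehot = np.zeros((len(labels), num_classes))
    onehot[sampled_mask, labels[sampled_mask]] = 1.0

    # Each instance accumulates its neighbors' label mass and takes
    # the strongest class as its pseudo-label.
    scores = sim @ onehot
    pseudo = scores.argmax(axis=1)

    # Trusted labels override whatever was propagated onto them.
    pseudo[sampled_mask] = labels[sampled_mask]
    return pseudo
```

In this toy setting, an unsampled instance whose (possibly noisy) label disagrees with its nearest trusted neighbors gets relabeled toward them, which is the mechanism that lets a sampling-based method reuse the data it would otherwise discard.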