Contrastive Self-Supervised Learning: A Survey on Different Architectures

Adnan Khan, S. Albarri, Muhammad Arslan Manzoor
2022 2nd International Conference on Artificial Intelligence (ICAI), published 2022-03-30
DOI: 10.1109/ICAI55435.2022.9773725
Self-Supervised Learning (SSL) has enhanced the learning of semantic representations from images. SSL reduces the need for annotating or labelling data by relying less on class labels during the training phase. SSL techniques based on Contrastive Learning (CL) are gaining prevalence because of their low dependency on training data labels. Various CL methods are producing state-of-the-art results on datasets that serve as benchmarks for supervised learning. In this survey, we review CL-based methods including SimCLR, MoCo, BYOL, SwAV, SimTriplet and SimSiam, and compare these pipelines in terms of their accuracy on the ImageNet and VOC07 benchmarks. BYOL proposes a simple yet powerful architecture that achieves a 74.30% accuracy score on the image classification task. Using a clustering approach, SwAV outperforms the other architectures, achieving 75.30% top-1 ImageNet classification accuracy. In addition, we shed light on the importance of CL approaches, which can maximise the use of the huge amounts of data available today. Finally, we report the impediments of current CL methodologies and emphasize the need for computationally efficient CL pipelines.
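The contrastive objective underlying several of the surveyed methods (most directly SimCLR) is the NT-Xent loss: embeddings of two augmented views of the same image are pulled together, while embeddings of other images in the batch act as negatives. The following is a minimal NumPy sketch of that loss, not code from the paper; batch size, embedding dimension, and temperature are illustrative.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: embeddings of two augmented views, shape (N, dim).
    For each of the 2N embeddings, the positive is its counterpart
    view; the remaining 2N - 2 embeddings serve as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, dim)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = (z @ z.T) / temperature                     # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    n = z1.shape[0]
    # Index of each sample's positive: i <-> i + N.
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Log-softmax over each row, then pick out the positive entry.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos_idx].mean()
```

When the two views of each image map to nearby embeddings, the positive term dominates each softmax and the loss is low; methods such as BYOL and SimSiam instead avoid explicit negatives, which is one of the architectural distinctions the survey compares.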