Bayesian Inverse Graphics for Few-Shot Concept Learning
Octavio Arriaga, Jichen Guo, Rebecca Adam, Sebastian Houben, Frank Kirchner
{"title":"用于少量概念学习的贝叶斯逆向图形","authors":"Octavio Arriaga, Jichen Guo, Rebecca Adam, Sebastian Houben, Frank Kirchner","doi":"arxiv-2409.08351","DOIUrl":null,"url":null,"abstract":"Humans excel at building generalizations of new concepts from just one single\nexample. Contrary to this, current computer vision models typically require\nlarge amount of training samples to achieve a comparable accuracy. In this work\nwe present a Bayesian model of perception that learns using only minimal data,\na prototypical probabilistic program of an object. Specifically, we propose a\ngenerative inverse graphics model of primitive shapes, to infer posterior\ndistributions over physically consistent parameters from one or several images.\nWe show how this representation can be used for downstream tasks such as\nfew-shot classification and pose estimation. Our model outperforms existing\nfew-shot neural-only classification algorithms and demonstrates generalization\nacross varying lighting conditions, backgrounds, and out-of-distribution\nshapes. By design, our model is uncertainty-aware and uses our new\ndifferentiable renderer for optimizing global scene parameters through gradient\ndescent, sampling posterior distributions over object parameters with Markov\nChain Monte Carlo (MCMC), and using a neural based likelihood function.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Bayesian Inverse Graphics for Few-Shot Concept Learning\",\"authors\":\"Octavio Arriaga, Jichen Guo, Rebecca Adam, Sebastian Houben, Frank Kirchner\",\"doi\":\"arxiv-2409.08351\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Humans excel at building generalizations of new concepts from just one single\\nexample. Contrary to this, current computer vision models typically require\\nlarge amount of training samples to achieve a comparable accuracy. In this work\\nwe present a Bayesian model of perception that learns using only minimal data,\\na prototypical probabilistic program of an object. Specifically, we propose a\\ngenerative inverse graphics model of primitive shapes, to infer posterior\\ndistributions over physically consistent parameters from one or several images.\\nWe show how this representation can be used for downstream tasks such as\\nfew-shot classification and pose estimation. Our model outperforms existing\\nfew-shot neural-only classification algorithms and demonstrates generalization\\nacross varying lighting conditions, backgrounds, and out-of-distribution\\nshapes. 
By design, our model is uncertainty-aware and uses our new\\ndifferentiable renderer for optimizing global scene parameters through gradient\\ndescent, sampling posterior distributions over object parameters with Markov\\nChain Monte Carlo (MCMC), and using a neural based likelihood function.\",\"PeriodicalId\":501479,\"journal\":{\"name\":\"arXiv - CS - Artificial Intelligence\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.08351\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08351","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Bayesian Inverse Graphics for Few-Shot Concept Learning
Humans excel at generalizing new concepts from just a single example. In contrast, current computer vision models typically require large numbers of training samples to achieve comparable accuracy. In this work we present a Bayesian model of perception that learns a prototypical probabilistic program of an object from only minimal data. Specifically, we propose a generative inverse-graphics model of primitive shapes to infer posterior distributions over physically consistent parameters from one or several images. We show how this representation can be used for downstream tasks such as few-shot classification and pose estimation. Our model outperforms existing few-shot, neural-only classification algorithms and generalizes across varying lighting conditions, backgrounds, and out-of-distribution shapes. By design, our model is uncertainty-aware: it uses our new differentiable renderer to optimize global scene parameters through gradient descent, samples posterior distributions over object parameters with Markov Chain Monte Carlo (MCMC), and employs a neural-based likelihood function.
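To make the inference pipeline described in the abstract concrete, the following is a minimal, hypothetical sketch rather than the authors' implementation: a toy differentiable "renderer" maps parameters to a 1-D image, a global scene parameter (light intensity) is fit by gradient descent through the renderer, and the posterior over an object parameter (a radius) is sampled with Metropolis-Hastings MCMC. A Gaussian image likelihood stands in for the paper's neural-based likelihood, and all function names and values here are illustrative assumptions.

```python
import numpy as np
import jax
import jax.numpy as jnp

def render(obj_radius, light, xs):
    """Toy differentiable renderer: a soft 1-D disc of given radius, scaled by light."""
    return light * jax.nn.sigmoid(10.0 * (obj_radius - jnp.abs(xs)))

def log_likelihood(image, obs, sigma=0.05):
    """Gaussian pixel likelihood, standing in for the neural-based likelihood."""
    return -0.5 * jnp.sum((image - obs) ** 2) / sigma**2

xs = jnp.linspace(-1.0, 1.0, 64)
obs = render(0.4, 0.8, xs) + 0.02 * np.random.randn(64)   # synthetic observation

# Step 1: optimize the global scene parameter (light) by gradient descent
# through the differentiable renderer.
def neg_ll_wrt_light(light, radius):
    return -log_likelihood(render(radius, light, xs), obs)

light, radius_guess, lr = 0.5, 0.3, 1e-4
grad_light = jax.grad(neg_ll_wrt_light)
for _ in range(200):
    light = light - lr * grad_light(light, radius_guess)

# Step 2: sample the posterior over the object parameter (radius)
# with a simple Metropolis-Hastings MCMC chain.
def log_post(radius):
    log_prior = -0.5 * ((radius - 0.5) / 0.5) ** 2         # weak Gaussian prior
    return float(log_likelihood(render(radius, light, xs), obs) + log_prior)

samples, radius, lp = [], 0.3, log_post(0.3)
for _ in range(2000):
    prop = radius + 0.05 * np.random.randn()               # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(np.random.rand()) < lp_prop - lp:            # Metropolis acceptance
        radius, lp = prop, lp_prop
    samples.append(radius)

print("fitted light:", float(light),
      "posterior radius mean:", np.mean(samples[500:]))
```

The split mirrors the abstract's design: deterministic global scene parameters are handled by gradient-based optimization, while per-object parameters get full posterior distributions via MCMC, so the model remains uncertainty-aware about the object itself.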