{"title":"学习树结构近似的条件随机场","authors":"A. Skurikhin","doi":"10.1109/AIPR.2014.7041937","DOIUrl":null,"url":null,"abstract":"Exact probabilistic inference is computationally intractable in general probabilistic graph-based models, such as Markov Random Fields and Conditional Random Fields (CRFs). We investigate spanning tree approximations for the discriminative CRF model. We decompose the original computationally intractable grid-structured CRF model containing many cycles into a set of tractable sub-models using a set of spanning trees. The structure of spanning trees is generated uniformly at random among all spanning trees of the original graph. These trees are learned independently to address the classification problem and Maximum Posterior Marginal estimation is performed on each individual tree. Classification labels are produced via voting strategy over the marginals obtained on the sampled spanning trees. The learning is computationally efficient because the inference on trees is exact and efficient. Our objective is to investigate the capability of approximation of the original loopy graph model with loopy belief propagation inference via learning a pool of randomly sampled acyclic graphs. We focus on the impact of memorizing the structure of sampled trees. We compare two approaches to create an ensemble of spanning trees, whose parameters are optimized during learning: (1) memorizing the structure of the sampled spanning trees used during learning and, (2) not storing the structure of the sampled spanning trees after learning and regenerating trees anew. Experiments are done on two image datasets consisting of synthetic and real-world images. These datasets were designed for the tasks of binary image denoising and man-made structure recognition.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Learning tree-structured approximations for conditional random fields\",\"authors\":\"A. Skurikhin\",\"doi\":\"10.1109/AIPR.2014.7041937\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Exact probabilistic inference is computationally intractable in general probabilistic graph-based models, such as Markov Random Fields and Conditional Random Fields (CRFs). We investigate spanning tree approximations for the discriminative CRF model. We decompose the original computationally intractable grid-structured CRF model containing many cycles into a set of tractable sub-models using a set of spanning trees. The structure of spanning trees is generated uniformly at random among all spanning trees of the original graph. These trees are learned independently to address the classification problem and Maximum Posterior Marginal estimation is performed on each individual tree. Classification labels are produced via voting strategy over the marginals obtained on the sampled spanning trees. The learning is computationally efficient because the inference on trees is exact and efficient. Our objective is to investigate the capability of approximation of the original loopy graph model with loopy belief propagation inference via learning a pool of randomly sampled acyclic graphs. We focus on the impact of memorizing the structure of sampled trees. 
We compare two approaches to create an ensemble of spanning trees, whose parameters are optimized during learning: (1) memorizing the structure of the sampled spanning trees used during learning and, (2) not storing the structure of the sampled spanning trees after learning and regenerating trees anew. Experiments are done on two image datasets consisting of synthetic and real-world images. These datasets were designed for the tasks of binary image denoising and man-made structure recognition.\",\"PeriodicalId\":210982,\"journal\":{\"name\":\"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)\",\"volume\":\"27 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-11-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AIPR.2014.7041937\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIPR.2014.7041937","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Learning tree-structured approximations for conditional random fields
Exact probabilistic inference is computationally intractable in general graph-based probabilistic models such as Markov Random Fields and Conditional Random Fields (CRFs). We investigate spanning-tree approximations for the discriminative CRF model. Using a set of spanning trees, we decompose the original grid-structured CRF model, which contains many cycles and is therefore computationally intractable, into a set of tractable tree-structured sub-models. The structure of each spanning tree is sampled uniformly at random from the set of all spanning trees of the original graph. These trees are learned independently to address the classification problem, and Maximum Posterior Marginal (MPM) estimation is performed on each individual tree. Classification labels are then produced by a voting strategy over the marginals obtained on the sampled spanning trees. Learning is computationally efficient because inference on trees is exact and fast. Our objective is to investigate how well a pool of randomly sampled acyclic graphs, learned in this way, can approximate the original loopy graph model with loopy belief propagation inference. In particular, we focus on the impact of memorizing the structure of the sampled trees, comparing two approaches to creating an ensemble of spanning trees whose parameters are optimized during learning: (1) memorizing the structure of the spanning trees sampled during learning, and (2) discarding the tree structures after learning and regenerating trees anew. Experiments are performed on two image datasets, consisting of synthetic and real-world images, designed for the tasks of binary image denoising and man-made structure recognition.
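The pipeline the abstract outlines (uniform spanning-tree sampling, exact inference on each tree, voting over marginals) can be sketched compactly. Below is a minimal, illustrative Python sketch, not the paper's implementation: it samples uniform spanning trees of a 4-connected grid with Wilson's loop-erased random-walk algorithm, computes exact per-node marginals on each tree with two-pass sum-product belief propagation, and realizes the "voting strategy" as averaging marginals across trees before an argmax (one reasonable reading). The unary and pairwise potentials are fixed placeholders; in the paper they are discriminatively learned per tree.

```python
# Minimal sketch of the tree-ensemble approximation described above.
# NOT the paper's implementation: potentials are placeholders rather than
# learned per-tree CRF parameters.
import random
from collections import defaultdict
import numpy as np

def grid_edges(h, w):
    """Edges of a 4-connected h-by-w grid; nodes indexed 0..h*w-1."""
    edges = []
    for r in range(h):
        for c in range(w):
            v = r * w + c
            if c + 1 < w:
                edges.append((v, v + 1))  # right neighbor
            if r + 1 < h:
                edges.append((v, v + w))  # bottom neighbor
    return edges

def wilson_uniform_spanning_tree(n, adj, rng):
    """Sample a spanning tree uniformly at random (Wilson's algorithm)."""
    root = rng.randrange(n)
    in_tree = [False] * n
    in_tree[root] = True
    parent = [-1] * n
    for start in range(n):
        u, nxt = start, {}
        while not in_tree[u]:            # random walk until the tree is hit;
            nxt[u] = rng.choice(adj[u])  # overwriting nxt erases loops
            u = nxt[u]
        u = start
        while not in_tree[u]:            # attach the loop-erased path
            parent[u] = nxt[u]
            in_tree[u] = True
            u = nxt[u]
    return [(v, parent[v]) for v in range(n) if parent[v] != -1]

def tree_marginals(n, tree_edges, unary, pairwise):
    """Exact per-node marginals on a tree via two-pass sum-product BP.
    unary: (n, k) nonnegative; pairwise: symmetric (k, k) nonnegative."""
    nbrs = defaultdict(list)
    for u, v in tree_edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    root = tree_edges[0][0]
    order, parent = [root], {root: None}
    for u in order:                      # BFS: parents precede children
        for v in nbrs[u]:
            if v not in parent:
                parent[v] = u
                order.append(v)
    msg = {}                             # msg[(u, v)]: message u -> v
    for u in reversed(order):            # upward pass (leaves to root)
        if parent[u] is None:
            continue
        b = unary[u].copy()
        for c in nbrs[u]:
            if c != parent[u]:
                b *= msg[(c, u)]
        m = pairwise @ b
        msg[(u, parent[u])] = m / m.sum()
    for u in order:                      # downward pass (root to leaves)
        for c in nbrs[u]:
            if c == parent[u]:
                continue
            b = unary[u].copy()
            for o in nbrs[u]:
                if o != c:
                    b *= msg[(o, u)]
            m = pairwise @ b
            msg[(u, c)] = m / m.sum()
    marg = np.empty_like(unary)
    for u in range(n):                   # belief = unary * all incoming msgs
        b = unary[u].copy()
        for c in nbrs[u]:
            b *= msg[(c, u)]
        marg[u] = b / b.sum()
    return marg

def ensemble_predict(h, w, unary, pairwise, n_trees=25, seed=0):
    """Average MPM marginals over sampled spanning trees, then argmax."""
    rng = random.Random(seed)
    n = h * w
    adj = defaultdict(list)
    for u, v in grid_edges(h, w):
        adj[u].append(v)
        adj[v].append(u)
    avg = np.zeros_like(unary)
    for _ in range(n_trees):
        tree = wilson_uniform_spanning_tree(n, adj, rng)
        avg += tree_marginals(n, tree, unary, pairwise)
    return (avg / n_trees).argmax(axis=1)

# Toy usage: a 6x6 grid, random binary unaries, Potts-style smoothing.
unary = np.random.default_rng(0).random((36, 2)) + 1e-3
pairwise = np.array([[2.0, 1.0], [1.0, 2.0]])
labels = ensemble_predict(6, 6, unary, pairwise)
```

Because each sub-model is a tree, both passes of belief propagation are exact, which is what makes the ensemble cheap to train relative to loopy inference on the original grid.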