{"title":"流估计中模型复杂度控制","authors":"Zoran Duric, Fayin Li, H. Wechsler, V. Cherkassky","doi":"10.1109/ICCV.2003.1238445","DOIUrl":null,"url":null,"abstract":"This paper describes a novel application of statistical learning theory (SLT) to control model complexity in flow estimation. SLT provides analytical generalization bounds suitable for practical model selection from small and noisy data sets of image measurements (normal flow). The method addresses the aperture problem by using the penalized risk (ridge regression). We demonstrate an application of this method on both synthetic and real image sequences and use it for motion interpolation and extrapolation. Our experimental results show that our approach compares favorably against alternative model selection methods such as the Akaike's final prediction error, Schwartz's criterion, generalized cross-validation, and Shibata's model selector.","PeriodicalId":131580,"journal":{"name":"Proceedings Ninth IEEE International Conference on Computer Vision","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2003-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Controlling model complexity in flow estimation\",\"authors\":\"Zoran Duric, Fayin Li, H. Wechsler, V. Cherkassky\",\"doi\":\"10.1109/ICCV.2003.1238445\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper describes a novel application of statistical learning theory (SLT) to control model complexity in flow estimation. SLT provides analytical generalization bounds suitable for practical model selection from small and noisy data sets of image measurements (normal flow). The method addresses the aperture problem by using the penalized risk (ridge regression). We demonstrate an application of this method on both synthetic and real image sequences and use it for motion interpolation and extrapolation. Our experimental results show that our approach compares favorably against alternative model selection methods such as the Akaike's final prediction error, Schwartz's criterion, generalized cross-validation, and Shibata's model selector.\",\"PeriodicalId\":131580,\"journal\":{\"name\":\"Proceedings Ninth IEEE International Conference on Computer Vision\",\"volume\":\"18 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2003-10-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings Ninth IEEE International Conference on Computer Vision\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCV.2003.1238445\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings Ninth IEEE International Conference on Computer Vision","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCV.2003.1238445","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
This paper describes a novel application of statistical learning theory (SLT) to controlling model complexity in flow estimation. SLT provides analytical generalization bounds suitable for practical model selection from small and noisy data sets of image measurements (normal flow). The method addresses the aperture problem by using penalized risk minimization (ridge regression). We demonstrate the method on both synthetic and real image sequences and use it for motion interpolation and extrapolation. Our experimental results show that our approach compares favorably against alternative model selection methods such as Akaike's final prediction error, Schwartz's criterion, generalized cross-validation, and Shibata's model selector.
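To make the penalized-risk idea concrete, the sketch below fits a ridge-regularized parametric model to noisy scalar measurements and picks the regularization strength by minimizing an SLT-style penalized empirical risk. This is a minimal illustration only: the polynomial flow parameterization, the toy data, and the specific VC penalization factor (taken from Cherkassky's SLT model-selection writings) are assumptions on my part and are not spelled out in this abstract.

```python
import numpy as np

def vc_penalization(p, n):
    """VC-based penalization factor for penalized regression.

    Assumed form, following Cherkassky's SLT model-selection work
    (not given explicitly in the abstract):
        r(p, n) = (1 - sqrt(p - p*ln(p) + ln(n)/(2n)))_+^{-1},
    where p = h/n is effective degrees of freedom per sample.
    """
    inner = p - p * np.log(p) + np.log(n) / (2.0 * n)
    denom = 1.0 - np.sqrt(max(inner, 0.0))
    return np.inf if denom <= 0 else 1.0 / denom

def ridge_fit(X, y, lam):
    """Ridge (penalized least-squares) estimate and its effective DoF."""
    n, d = X.shape
    G = X.T @ X + lam * np.eye(d)
    w = np.linalg.solve(G, X.T @ y)
    # Effective DoF = trace of the hat matrix X (X'X + lam*I)^-1 X'
    dof = np.trace(X @ np.linalg.solve(G, X.T))
    return w, dof

def select_lambda(X, y, lambdas):
    """Pick the ridge parameter minimizing the VC-penalized empirical risk."""
    n = X.shape[0]
    best = None
    for lam in lambdas:
        w, dof = ridge_fit(X, y, lam)
        emp_risk = np.mean((y - X @ w) ** 2)
        est_risk = emp_risk * vc_penalization(dof / n, n)
        if best is None or est_risk < best[0]:
            best = (est_risk, lam, w)
    return best[1], best[2]

# Toy example: noisy 1-D measurements fitted by a low-order polynomial
# model (purely illustrative data, not from the paper's experiments).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=40)
y = 0.5 * x - 0.3 * x**2 + 0.05 * rng.standard_normal(40)  # synthetic samples
X = np.vander(x, N=8, increasing=True)                      # polynomial features
lam, w = select_lambda(X, y, lambdas=np.logspace(-6, 2, 30))
print("selected ridge parameter:", lam)
```

The ridge parameter plays the role of the complexity control: sweeping it traces out models of different effective degrees of freedom, and the penalized risk estimate stands in for the hold-out error that a small, noisy normal-flow data set cannot afford.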