Kullback-Leibler-based characterizations of score-driven updates
Ramon de Punder, Timo Dimitriadis, Rutger-Jan Lange
arXiv:2408.02391 [econ.EM] (arXiv - ECON - Econometrics), 5 August 2024
Score-driven models have been applied in some 400 published articles over the
last decade. Much of this literature cites the optimality result in Blasques et
al. (2015), which, roughly, states that sufficiently small score-driven updates
are unique in locally reducing the Kullback-Leibler (KL) divergence relative to
the true density for every observation. This is at odds with other well-known
optimality results; the Kalman filter, for example, is optimal in a mean
squared error sense, but may move in the wrong direction for atypical
observations. We show that score-driven filters are, similarly, not guaranteed
to improve the localized KL divergence at every observation. The seemingly
stronger result in Blasques et al. (2015) is due to their use of an improper
(localized) scoring rule. Although a guaranteed improvement for every
observation is unattainable, we prove that sufficiently small score-driven
updates are unique in reducing the KL divergence relative to the true density
in expectation. This positive, albeit weaker, result justifies the continued
use of score-driven models and places their information-theoretic properties on
solid footing.
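The two properties the abstract contrasts can be seen in a minimal sketch of a score-driven (GAS-type) filter for the mean of a Gaussian with known variance. The update moves the filtered parameter a small step along the score of the log-density, f_{t+1} = f_t + α · ∂ log p(y_t | f_t)/∂f, which for the Gaussian location model reduces to an exponentially weighted moving average. The function names, step size, and simulated data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def score_driven_filter(y, f0=0.0, alpha=0.1, sigma2=1.0):
    """Score-driven update for a Gaussian location parameter.

    For N(f, sigma2), the score w.r.t. f is (y - f) / sigma2, so
    each update nudges f toward the current observation.
    """
    f = np.empty(len(y) + 1)
    f[0] = f0
    for t, yt in enumerate(y):
        score = (yt - f[t]) / sigma2       # d log N(y; f, sigma2) / df
        f[t + 1] = f[t] + alpha * score    # small step along the score
    return f

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=2000)
f = score_driven_filter(y)
# On average the filter settles near the true mean of 2.0:
# improvement holds in expectation.
print(f[-1])

# But a single atypical observation pulls the update away from the
# truth: starting at the true value 2.0, an outlier at 10.0 moves the
# filtered parameter to 2.0 + 0.1 * (10.0 - 2.0) = 2.8, so no
# per-observation improvement is guaranteed.
f_out = score_driven_filter(np.array([10.0]), f0=2.0)
print(f_out[1])
```

The second print illustrates the abstract's point that, like the Kalman filter, a score-driven update can move in the "wrong" direction for an atypical observation, even though the update is beneficial in expectation.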