Dictionary-based methods are the most commonly used approach for quantifying the qualitative information in (central bank) communication. In this paper, we propose machine learning models that generate embeddings from words and documents. Embeddings are multidimensional numerical representations of text that capture its underlying semantic relationships. Using a novel corpus of 22,000 documents from 128 central banks, we generate the first domain-specific embeddings for central bank communication, which outperform dictionaries and existing embeddings on tasks such as predicting monetary policy shocks. We further demonstrate the efficacy of our embeddings by constructing an index that tracks the extent to which Federal Reserve communications align with an inflation-targeting stance. Our empirical results indicate that deviations from inflation-targeting language substantially affect market-based expectations and influence monetary policy decisions, significantly reducing the inflation response parameter in an estimated Taylor rule.
