This paper studies the generation of inflation expectations using generative AI in survey experiments, examining diverse agents created with both proprietary and open-source large language models (LLMs). It shows that model architecture significantly impacts expectations, with proprietary models generally exhibiting less disagreement in their responses than open-source alternatives. Some LLMs predict higher inflation than actual rates, aligning with patterns observed in the Survey of Consumer Expectations. Information treatments, particularly forward guidance on inflation, influence LLMs’ inflation expectations, though with varying magnitudes across model types. Customizing prompts with demographic personas induces heterogeneous responses that mirror human survey behaviors, with some biases similar to those documented in household surveys. The paper also demonstrates how central banks could leverage these models as communication policy tools to test messaging strategies before implementation.
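As a rough illustration of the survey-experiment setup the abstract describes, the sketch below shows how a persona-conditioned inflation-expectation question with an optional forward-guidance information treatment might be posed to a chat model. This is not the authors' implementation: the model name, prompt wording, persona details, and use of the OpenAI Python client are all assumptions made for the example.

```python
# Illustrative sketch only: persona-conditioned inflation-expectation survey
# with an optional information treatment. Model name, prompt wording, and the
# OpenAI client usage are assumptions, not the paper's actual implementation.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PERSONA = (
    "You are a 45-year-old survey respondent with a high-school education "
    "and a household income of about $40,000."  # hypothetical demographic persona
)
QUESTION = (
    "Over the next 12 months, what do you expect the rate of inflation to be? "
    "Answer with a single number in percent."
)
TREATMENT = (
    "Before answering, consider this statement from the central bank: "
    "inflation is projected to return to the 2 percent target within a year."
)

def elicit_expectation(treated: bool) -> str:
    """Ask the model for a point inflation forecast, optionally preceded by a
    forward-guidance information treatment."""
    user_content = (TREATMENT + "\n\n" + QUESTION) if treated else QUESTION
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the paper compares several proprietary and open-source LLMs
        temperature=1.0,      # nonzero temperature so repeated draws can disagree
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": user_content},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print("Pre-treatment: ", elicit_expectation(treated=False))
    print("Post-treatment:", elicit_expectation(treated=True))
```

Repeating such calls across many personas and models, and comparing pre- and post-treatment answers, would yield the kind of disagreement and treatment-response measures the paper analyzes.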
