Artificial intelligence is widely used in financial services, yet its adoption in venture capital (VC) remains limited, largely because predictive models are opaque. This study examines how explanation format and contextual enrichment shape trust in AI-driven venture recommendations. We compare black-box machine learning predictions of startup success with SHAP visualizations and GPT-generated textual summaries, including enriched versions that reflect venture-specific success factors. Two groups participated: VC investors and financial advisors. Explanations significantly increased trust relative to the black-box baseline, with the strongest effects for feature-enriched formats. Technical orientation moderated format preferences, but only when contextual enrichment was present. The study makes three contributions. First, it shows that explanation enrichment is central to trust in VC decision-making. Second, it introduces GPT-generated SHAP summaries as a practical mechanism for explaining model outputs. Third, it empirically links explanation design to trust calibration and cognitive alignment, extending Trust Calibration Models and Cognitive Fit Theory to VC decisions.