Despite the growing importance of machine learning in today's organisations, we know relatively little about how machine learning operates and how it influences calculative practices and cultures. Based on 695 hours of ethnographic fieldwork with a team of credit modellers at a large internet company in China, this study analyses the calculative culture that underpins the development of credit models. We show that credit scoring methodologies develop progressively into a self-referential set of calculative practices in which substantive concerns about loan default are supplanted by more insular concerns about the seamless operation of the model. Insofar as the latter can only be measured by the model itself, the role of calculative experts is reduced to that of facilitators of machine learning rather than purposeful interpreters of machine learning-produced data. Credit scoring experts thus focus more on ensuring that models have a robust conversation with themselves than on ensuring that they converse with managers or credit scoring agents. This matters because machine learning-driven credit scoring models end up privileging access to credit for those whose data trails pass more readily through data preparation filters, rather than for those who are less likely to default. We thus contribute to an understanding of how machine learning-driven calculative cultures both enact algorithmic bias and operate beyond the ken of purposeful human actors.