Fire testing enables an individual or an organisation to make a claim about how a material, product, or system will perform in operational use. This paper describes and analyses the various reaction-to-fire tests that have been used over the last 100 years in the UK. By analysing the commonalities and differences between these tests we propose a ‘taxonomy of testing’. We suggest that tests may be classified by the degree to which users may unthinkingly apply the results without producing negative fire safety outcomes. We propose three categories: unrepresentative tests, model tests, and technological proof tests. Unrepresentative tests are those which do not mimic building fire scenarios, but have thresholds so conservative that users need not consider whether the test applies to their intended application. Model tests are those based on ‘models’ of expected fire scenarios; users must therefore be confident that the model is sufficiently similar to their application. Technological proof tests are those which provide a more realistic test of a real building system; users must carefully analyse the similarities between the test and the real building before applying the results. From this we conclude that where user competence is low, policymakers should cite only unrepresentative (and conservative) tests within their guidance. Conversely, where user competence is high, policymakers may more safely cite model or technological proof tests. The kinds of tests that may safely be cited in guidance are therefore inextricably linked to the expertise of the user.