The current paradigm for detection of anti-drug antibodies (ADA) recommends a tiered strategy in which samples are tested in consecutive screening and confirmatory assays to ensure high sensitivity and specificity of detection. In each tier, individual responses are compared against a statistically determined cut point to make positive/negative classifications and advance the sample to the next testing tier. This manuscript argues that the concept of a cut point is scientifically flawed and not suitable for making positive/negative ADA classifications. A cut point set at or above the 95th percentile of population responses does not reduce the number of false negatives; on the contrary, it impairs the ability to detect ADA in the 95% of the population with lower responses. Likewise, the ADA classification of individual study samples should not be predicated on the responses of the other individuals used to determine the assay cut points. Experimental conditions used during cut point determination often differ from those encountered during testing of study samples (e.g. drug-naïve vs. treated subjects, different disease states before and after treatment), and therefore a cut point may not be suitable for testing post-baseline samples. Since the cut point cannot be trusted to make ADA classifications, it is proposed to discard it, together with tiered testing, and instead base the detection of ADA on post-baseline signal changes and their relationship to pharmacokinetics, pharmacodynamics, efficacy, and safety. Discarding both the cut point and the tiered strategy is expected not only to significantly reduce the workload that bioanalytical laboratories dedicate to immunogenicity testing but also to improve data analysis and interpretation.
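The contrast the abstract draws between the two decision rules can be sketched numerically. The snippet below is a minimal illustration, not the manuscript's method: the drug-naïve signal values, the subject's baseline and post-dose readings, and the two-fold change threshold are all hypothetical, chosen only to show how a conventional 95th-percentile cut point can call a clearly elevated post-baseline response negative while a within-subject signal-change rule flags it.

```python
import numpy as np

# Hypothetical screening signals from 20 drug-naive subjects,
# used only to illustrate how a screening cut point is
# conventionally derived (simulated values, not real assay data).
naive = np.array([0.82, 0.91, 0.95, 1.00, 1.02, 1.05, 1.08, 1.10,
                  1.12, 1.15, 1.18, 1.20, 1.22, 1.25, 1.28, 1.30,
                  1.33, 1.36, 1.40, 1.55])

# Conventional screening cut point: the 95th percentile of the
# drug-naive distribution, so about 5% of ADA-negative samples
# are expected to screen (falsely) positive.
cut_point = np.percentile(naive, 95)

def screen(response, cut_point):
    """Tier-1 screening call based solely on the population cut point."""
    return "positive" if response >= cut_point else "negative"

# A treated subject whose signal doubles from baseline yet stays
# below the population-derived cut point is called negative by the
# cut-point rule, even though the within-subject change is striking.
baseline, post_dose = 0.60, 1.20
cut_point_call = screen(post_dose, cut_point)   # cut-point rule: "negative"
fold_change_flag = post_dose / baseline >= 2.0  # signal-change rule: flagged

print(cut_point_call, fold_change_flag)
```

The fold-change threshold of 2.0 is an arbitrary placeholder; the manuscript's proposal ties post-baseline signal changes to pharmacokinetics, pharmacodynamics, efficacy, and safety rather than to any single fixed ratio.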