Discriminatory bias in algorithmic systems is widely documented. How should the law respond? A broad consensus suggests approaching the issue principally through the lens of indirect discrimination, focusing on algorithmic systems' impact. In this article, we set out to challenge this analysis, arguing that while indirect discrimination law has an important role to play, a narrow focus on this regime in the context of machine learning algorithms is both normatively undesirable and legally flawed. We illustrate how certain forms of algorithmic bias in frequently deployed algorithms might constitute direct discrimination, and explore the ramifications: the practical consequences, and the broader challenges automated decision-making systems pose to the conceptual apparatus of anti-discrimination law.
Constitutions come under pressure during emergencies and, as is increasingly clear, during pandemics. Taking as its focus the legislative and post-legislative debates in Westminster and the Devolved Legislatures on the Coronavirus Act 2020 (CVA), this paper explores the robustness of parliamentary accountability during the pandemic, and finds it lacking. It suggests that this is attributable not to the situation of emergency per se, but to (a) executive decisions that have limited Parliament's capacity to scrutinise; (b) MPs' failure to maximise the opportunities for scrutiny that did exist; and (c) the limited nature of Legislative Consent Motions (LCMs) as a mode of holding the central government to account. While at first glance the CVA appears to confirm the view that in emergencies law empowers the executive and reduces its accountability, rendering legal constraints near-futile, our analysis suggests that this ought to be understood, to a significant extent, as a product of constitutional actors' mindset vis-à-vis accountability.