Key Points
- Policymakers, scholars, and commentators are increasingly concerned with the risks of using algorithms for profiling and automated decision-making.
- This article addresses how a Data Protection Impact Assessment (DPIA), applied as an algorithmic impact assessment (AIA), links the two faces of the General Data Protection Regulation's (GDPR) approach to algorithmic accountability: individual rights and systemic governance.
- We propose that AIAs simultaneously provide systemic governance of algorithmic decision-making and serve as an important ‘suitable safeguard’ (Article 22) of individual rights.
- As a nexus between the GDPR’s two approaches to algorithmic accountability, DPIAs have a heretofore unexplored link to individual transparency rights.
- Our examination of DPIAs suggests that the current focus on the right to explanation is far too narrow. We call, instead, for data controllers to consciously use the mandatory DPIA process to produce what we call ‘multi-layered explanations’ of algorithmic systems.
- This concept of multi-layered explanations not only more accurately describes what the GDPR is attempting to do, but also normatively fills potential gaps between the GDPR’s two approaches to algorithmic accountability.