The objective of this paper is to break down the widely presumed dichotomy, especially in law, between fully automated decisions and human decisions from a psychological and normative perspective. This is particularly relevant because human oversight is regarded as an effective means of quality control, including under the current AI Act. The phenomenon of automation bias argues against this assumption. We investigate automation bias as a behavioral effect and examine its implications in normative institutional decision-making situations. Automation bias, whereby individuals rely excessively on machine-generated decisions or proposals, has far-reaching implications. Excessive reliance may cause decision-makers to fail to meaningfully engage with the decision at hand, leaving automation failures undetected and deteriorating overall decision quality, potentially to the point where the decision support system has a net-negative impact. Because legal systems emphasize the role of human decisions in ensuring fairness and quality, this paper critically examines the inadequacies of current EU and national legal frameworks in addressing the risks of automation bias. Contributing a novel perspective, it integrates psychological, technical, and normative elements to analyze automation bias and its legal implications. Anchoring human decisions within legal principles, it navigates the intersections between AI and human-machine interaction from a normative point of view. An exploration of the AI Act sheds light on potential avenues for improvement. In conclusion, we propose four steps aimed at effectively countering the perils posed by automation bias. By linking psychological insights, legal analysis, and technical implications, this paper advocates a holistic approach to evolving legal frameworks in an increasingly automated world.