Friday, January 30, 2026

Rachovitsa: AI and Human Rights

Mando Rachovitsa (Univ. of Nottingham - Law) has published AI and Human Rights (in Artificial Intelligence: Law and Regulation, Charles Kerrigan ed., 2nd ed., Edward Elgar Publishing 2025). Here’s the abstract:

This chapter positions the relevance of human rights law to the risks associated with AI, AI systems and algorithmic decision-making. The discussion is informed by EU developments on protecting against the harmful effects of AI systems under the AI Act and the Digital Services Act, as well as by the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. The analysis addresses states’ human rights obligations across the lifecycle of AI systems, focusing on the challenges of algorithmic opacity and states’ responsibility to regulate AI systems through impact assessments. The discussion then highlights how the regulatory framework is evolving with regard to non-state actors’ human rights duties. Business corporations increasingly find themselves scrutinised by domestic courts in connection with human rights issues. The obligations, under the EU AI Act, to conduct a fundamental rights impact assessment for high-risk AI systems and, under the Digital Services Act, to conduct a risk assessment for systemic risks, which include actual or foreseeable negative effects on the exercise of human rights, reposition the relevance of human rights in designing and deploying AI. The last part of the chapter engages with the incompatibility of certain AI systems with human rights law. The chapter concludes by reinforcing the value of human rights law to AI while interrogating its capacity to capture all novel algorithmic harms.