In an open letter sent to EU legislators, over 60 human rights organizations argue that, as artificial intelligence is increasingly deployed by both the private and public sectors, the rule of law requires the EU to adopt robust safeguards to protect the very foundation our Union stands on. The misuse of AI systems, including their opaque and unaccountable deployment by public authorities, poses a serious threat to the rule of law and democracy.
The open letter was drafted and coordinated by the Civil Liberties Union for Europe (Liberties), the European Civic Forum (ECF), and the European Center for Not-for-Profit Law (ECNL) and was signed by more than 60 organizations, including Amnesty International, Access Now and EDRi.
Lawmakers must uphold the EU’s fundamental values
The European Union is in the final stages of negotiations on the Artificial Intelligence Act (“AI Act”). It is incumbent on EU legislators to pass laws that uphold the bloc’s values, including the rule of law, which is enshrined in Article 2 of the Treaty on European Union. Rule of law requirements include, among other things: a transparent, accountable, democratic, and pluralistic law-making process; legal certainty; the prohibition of arbitrary exercise of executive power; effective and equal judicial protection, including access to justice, before independent and impartial courts; the separation of powers; and non-discrimination and equality before the law.
The misuse of AI systems could threaten these basic democratic norms. Regulatory loopholes, such as national security and law enforcement exemptions, could be exploited to weaken democratic institutions and processes and the rule of law. The AI Act needs to create a robust, secure regulatory environment grounded in the protection of fundamental rights and the rule of law.
An essential element in achieving this is the mandatory inclusion of fundamental rights impact assessments (FRIAs) in the AI Act, as they appear in the European Parliament’s draft. But FRIAs for AI should have an added layer, considering not only potential harms to fundamental rights but also the impact an AI system could have on the rule of law. These FRIAs should be obligatory for all high-risk AI technologies to ensure that they are deployed in a way that upholds the principles of justice, accountability, and fairness. They provide a structured framework for identifying and avoiding potential fundamental rights violations, ensuring that AI technologies respect, promote, and protect our rights.
Loopholes and blanket exemptions must go
More than 60 civil society organizations have signed this open letter because there are serious doubts that the AI Act will properly safeguard the rule of law – and thus our free and democratic societies. A loophole has been inserted into drafts of the act that gives companies and public authorities alike the power to unilaterally decide that their AI system should be exempted from the law’s requirements, even though it is intended to be used in high-risk areas such as law enforcement, justice, elections, or essential public services.
If a provider chooses to exempt itself, the corresponding obligations for deployers of such systems will no longer apply either. As a result, public authorities deploying these high-risk systems would arbitrarily escape all AI Act obligations designed to safeguard people affected by abusive AI deployment.
Moreover, there is no assurance from legislators that the AI Act will not grant a blanket exemption for AI used for national security purposes. National security exemptions from AI Act requirements should be assessed on a case-by-case basis, in line with the EU Charter of Fundamental Rights and existing EU law. We have already witnessed the covert use of invasive surveillance software – Pegasus – by some EU member states to spy on journalists, civil society and opposition politicians.
The more than 60 fundamental rights organizations that have signed this open letter urge EU lawmakers to make three critical changes to safeguard fundamental rights and the rule of law:
- Mandate FRIAs for all high-risk AI systems, in line with the amendments proposed by the European Parliament in Article 29a, and include rules to ensure FRIAs are conducted in an open and transparent manner and their findings subject to public scrutiny.
- Reject the Council’s proposed amendment to Article 2 of the AI Act, which aims to exclude AI systems developed or used for national security purposes from the scope of the Act.
- Return to the original Commission proposal’s version of Article 6(2) of the AI Act, thereby removing newly added loopholes that would give AI developers, whether from the public or the private sector, the power to unilaterally exempt themselves from the safeguards set out in the AI Act.
As AI reshapes so many areas of society, from education and healthcare to governance and the administration of justice, EU lawmakers must not miss this opportunity to fully safeguard the rule of law and our Union’s most cherished rights and values.
You can read the full text of our open letter here.