Liberties has submitted its position to the European Commission regarding the public consultation on the definition of AI systems (Article 3) and the list of AI practices prohibited in the EU (Article 5). The delegated act should focus on inclusivity and on closing the loopholes that remain in the text of the AI Act.
The European Union’s AI Act is a groundbreaking legislative framework aimed at regulating artificial intelligence in a way that aligns with fundamental rights, democracy, and the rule of law. However, the final text of the AI Act contains troubling elements, both in the definition of AI systems and in the prohibitions. Liberties’ aim is to draw attention to issues that require special attention from the European Commission when it elaborates the delegated acts, so that loopholes are closed and overly broad rules are clarified. We believe that precise language and clear guidelines are essential to safeguarding proper enforcement and respecting fundamental rights in this field.
Ambiguous and Overly Broad Definition of AI Systems
Article 3(1) of the AI Act defines an AI system as
‘machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’.
This definition aligns with standards set by international organizations such as the OECD, which describes an AI system as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. However, the AI Act’s technical focus risks excluding systems that should fall within the scope of the law.
We believe that the Commission must address the following issues regarding the definition of an AI system:
- Autonomy Loopholes: The phrase “varying levels of autonomy” must be clarified to include all systems capable of influencing decisions or environments, regardless of how autonomous they are. Narrow interpretations could exempt systems that should be banned.
- Overemphasis on Technical Aspects: The definition heavily leans on technical parameters instead of putting greater emphasis on context and impact. The Commission should prioritize whether a system's outputs can harm fundamental rights.
- Broadening Inclusivity: To prevent loopholes, any AI system capable of generating outputs that could harm fundamental rights must fall within the law’s scope.
Strengthening Prohibited Practices
The public consultation also focused on the prohibited practices defined under Article 5 of the AI Act, which bans AI applications deemed to pose unacceptable risks. However, guidelines are needed to clarify the vague or subjective language that could otherwise render the prohibitions ineffective:
1. Subliminal, Manipulative, and Deceptive Techniques
The prohibition against AI systems deploying subliminal or manipulative techniques hinges on terms like “materially distort,” “significant harm,” and “subliminal.” The Commission must:
- Define thresholds for “materially” and “significant harm” to ensure enforceability.
- Expand the definition of subliminal techniques to include any method that influences decision-making or opinions without awareness.
- Address the concept of “consciousness,” which can vary across disciplines, to ensure broad applicability.
2. Social Scoring
Social scoring systems are banned when they evaluate or classify individuals based on social behavior or personal characteristics, resulting in detrimental treatment. However, the vague language allows room for misuse. The guidelines should:
- Clarify which systems qualify as social scoring.
- Clarify that “social behavior” includes diverse societal norms.
- Define proxy data, such as postal codes, which can indirectly enable discrimination.
- State that the limited duration of data collection (“over a certain period of time”) is irrelevant to whether a system qualifies as social scoring.
3. Predictive Policing
The prohibition on using AI for crime risk prediction based solely on profiling or personality traits includes exceptions for systems supporting human assessment based on “objective and verifiable facts.” To close loopholes:
- “Objective and verifiable facts” should require independent review, such as judicial oversight, so that the exception for systems supporting human assessment cannot be used to circumvent the ban.
- The meaning of “criminal offence” should be understood to include all behaviors that qualify as such under the laws of both the EU and the member states, maximizing the scope of prohibitions.
4. Untargeted Scraping of Data
AI systems that scrape facial images from the internet or CCTV footage to build recognition databases are prohibited when scraping is “untargeted.” The Commission should:
- Define “untargeted” to include any scraping not directly linked to individuals involved in criminal investigations.
- Ensure that the prohibition applies based on how a system is actually used, not on its stated design intent.
- Align with EU case law (La Quadrature du Net and Others, C-511/18), which emphasizes the importance of personal data protection, even when data is discarded post-analysis.
5. Biometric Categorization
Prohibited uses of biometric data include inferring sensitive categories of personal data. The Commission must clarify:
- That this prohibition applies even when such inferences are unintended but result from the system's design or data use.
- The distinction between legitimate uses of biometric data and exploitative categorization practices.
- That the definition of sensitive categories of personal data set out in the GDPR applies.
Conclusion
The success of the AI Act hinges on the clarity and enforceability of its provisions. Vague language in the definition and the prohibitions creates loopholes that undermine the law’s intent to protect fundamental rights and safeguard the rule of law.
The Commission has the opportunity to address these issues, ensuring that all AI systems operating in Europe align with its core values.
Liberties worked together with other civil society organizations, including EDRi and ECNL, in outlining its position.