EU AI Act: the first global law on artificial intelligence is a reality

Artificial intelligence is rapidly transforming every sector, from healthcare and finance to marketing and public administration. But who oversees these systems? How can we ensure that AI does not violate fundamental rights?

These questions gave rise to the EU AI Act, the world's first legislation regulating the development, distribution, and use of artificial intelligence around a key principle: risk.

What does the EU AI Act foresee?

Approved in 2024, the European Regulation on Artificial Intelligence introduces a risk-based approach. AI systems are classified into four main categories:

🔹 Minimal-risk AI
🔹 High-risk AI
🔹 Prohibited AI
🔹 General-purpose AI models (GPAI)

Prohibited Practices: AI That Will Have No Future

Some uses of artificial intelligence are considered ethically unacceptable or dangerous and will be banned starting from February 2, 2025.

Examples of prohibited AI:

  • Social scoring (“Black Mirror”-style): rating people based on their behavior, with real-life consequences.
  • Psychological manipulation: systems that push users toward harmful or uninformed decisions.
  • Untargeted scraping of facial images from the internet to build biometric databases.
  • Emotion recognition in schools and workplaces (except for health or safety reasons).
  • Real-time biometric surveillance by law enforcement, except in exceptional authorized cases.

High-risk AI: rules and requirements

High-risk systems are those that directly impact rights, safety, or access to essential services. They will have to comply with rigorous requirements.

Concrete examples:

  • AI used to hire candidates or evaluate performance.
  • Systems that determine credit or social benefits eligibility.
  • Algorithms in medical devices or critical infrastructures (energy, transportation).
  • AI tools that influence the judicial system or democratic elections.

Providers will have to:

  • Adopt continuous risk management systems
  • Ensure high quality of training and test data
  • Document every phase with technical transparency
  • Implement post-market monitoring plans

Deployers (users) will have to:

  • Use the systems according to the provider's instructions
  • Retain the AI system's logs
  • Assess the impact on fundamental rights in public or sensitive contexts

The rules will come into effect from August 2, 2026.

Rules for General-Purpose AI Models (GPAI)

The regulation also introduces a specific framework for foundation models like Granite by IBM or LLaMA 3 by Meta, which can be used in numerous different applications.

Obligations of GPAI providers:

  • Comply with EU copyright law
  • Publish a summary of the training data
  • Automatically mark generated content (deepfakes, text, images)

If a model exceeds the threshold of 10²⁵ FLOP during training, it is considered at systemic risk and must:

  • Report serious incidents
  • Adopt advanced cybersecurity measures
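To put the 10²⁵ FLOP threshold in perspective, a widely used rule of thumb estimates training compute as roughly 6 × parameters × training tokens. A minimal Python sketch (the model sizes below are hypothetical examples, and the 6·N·D formula is an approximation from the ML literature, not part of the regulation):

```python
# Rule of thumb: training FLOPs ~ 6 * parameters * training tokens.
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOP threshold named in the AI Act

def estimate_training_flops(params: float, tokens: float) -> float:
    """Rough compute estimate: ~6 FLOPs per parameter per training token."""
    return 6 * params * tokens

def is_systemic_risk(params: float, tokens: float) -> bool:
    """Check the estimate against the Act's systemic-risk threshold."""
    return estimate_training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD

# Hypothetical 70-billion-parameter model trained on 15 trillion tokens:
flops = estimate_training_flops(70e9, 15e12)
print(f"{flops:.1e} FLOP -> systemic risk: {is_systemic_risk(70e9, 15e12)}")
```

Such an estimate would only be a first screen; actual classification depends on the compute genuinely used in training.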

The rules apply to new models from August 2, 2025, with an adaptation period until 2027 for existing models.

Who does the EU AI Act apply to?

The regulation applies to:

  • Providers (those who develop or market AI)
  • Deployers (those who use AI systems in their business)
  • Importers and distributors in the EU

Non-EU companies are also subject to the regulation if their systems or outputs are used in the European Union. In such cases, they must appoint an authorized representative in the EU.

Sanctions: up to 35 million euros

The fines are proportional to the severity of the violation:

Type of Violation                           Maximum Penalty
Use of prohibited AI                        €35 million or 7% of global turnover
Violation of high-risk AI requirements      €15 million or 3% of global turnover
Misleading information to authorities       €7.5 million or 1% of global turnover

SMEs and startups benefit from a proportionate sanctioning regime.
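To see how the two caps combine: under the Act, the higher of the fixed amount and the turnover percentage applies, while SMEs and startups face the lower of the two. A minimal sketch (the company figures are hypothetical):

```python
def max_penalty(fixed_cap: float, turnover_pct: float,
                global_turnover: float, sme: bool = False) -> float:
    """Applicable fine ceiling: the higher of the fixed amount and the
    turnover percentage, or the lower of the two for SMEs and startups."""
    pct_cap = turnover_pct * global_turnover
    return min(fixed_cap, pct_cap) if sme else max(fixed_cap, pct_cap)

# Hypothetical company with €2 billion global turnover using prohibited AI:
cap = max_penalty(35e6, 0.07, 2e9)  # 7% of €2bn = €140m, above the €35m figure
```

For the same violation, a qualifying SME's ceiling would instead be the €35 million fixed amount.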

Key dates

  • August 1, 2024: official entry into force
  • February 2, 2025: ban on prohibited AI practices
  • August 2, 2025: obligations for new GPAI models
  • August 2, 2026: rules for high-risk AI
  • August 2, 2027: AI systems regulated by other EU legislation

FAQ – Frequently Asked Questions about the AI Act

▶ Does the AI Act also apply to the personal use of artificial intelligence?
No. Purely personal use and scientific research are exempt.

▶ Are open source systems regulated?
Yes, if they have a systemic impact or are used in high-risk contexts.

▶ What does “marking generated content” mean?
Adding machine-readable metadata or labels that indicate the content (e.g., a deepfake video) has been generated or altered by an AI.
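As a rough illustration of what a machine-readable label could look like, here is a minimal Python sketch. The field names are hypothetical, and real deployments would follow an established provenance standard (such as C2PA Content Credentials) rather than this ad-hoc JSON:

```python
import json
from datetime import datetime, timezone

def label_generated_content(content: str, generator: str) -> dict:
    """Attach a machine-readable provenance label to AI-generated content."""
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,          # hypothetical field names,
            "generator": generator,        # for illustration only
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

labeled = label_generated_content("Example output text.", "example-model-1")
print(json.dumps(labeled, indent=2))
```

A downstream system could then check the `ai_generated` flag before displaying or redistributing the content.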

Conclusion 

The EU AI Act is much more than a law: it is a declaration of intent on the future of European digital innovation. Companies that develop, distribute, or use AI must act immediately to assess their risks, adapt their processes, and invest in transparency and governance.