February 2025


BY: ANNE-SOLÈNE GAY

Artificial intelligence: the first rules already apply

The European regulation on artificial intelligence came into force on August 1, 2024, and is already partially applicable. It aims to regulate the use of artificial intelligence (AI) to protect people's rights and freedoms.

This new regulation affects organizations that develop AI systems and bring them to market (known as providers), and those that use them for business purposes (known as deployers).

The regulation adopts a risk-based approach: any provider or deployer of an AI system must therefore determine the level of risk that its system presents.

The regulation defines four levels of risk:

  • Unacceptable risk: systems considered a clear threat to people's safety, livelihoods, and rights. These systems are prohibited.
  • High risk: systems likely to present serious risks to health, safety, or fundamental rights. Obligations include conformity assessment, technical documentation, risk management mechanisms, etc.
  • Limited risk: systems subject to specific transparency measures. Users must be made aware that they are interacting with a machine so they can make an informed decision.
  • Minimal risk: systems that do not fall into any of the above categories. No specific obligations apply.
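By way of illustration only, the short Python sketch below shows how this classification could be recorded internally, for instance in a compliance inventory. The risk levels and obligations mirror the list above; the dictionary, the obligations_for helper, and the chatbot example are hypothetical and are not part of the regulation.

# Illustrative sketch: the four AI Act risk levels and the obligations
# summarized above, recorded as a simple lookup table.
RISK_LEVELS = {
    "unacceptable": "Prohibited: the system may not be placed on the market or used.",
    "high": "Conformity assessment, technical documentation, risk management mechanisms, etc.",
    "limited": "Transparency: users must be informed that they are interacting with a machine.",
    "minimal": "No specific obligations under the regulation.",
}

def obligations_for(risk_level: str) -> str:
    """Return the summarized obligations for a given risk level (hypothetical helper)."""
    if risk_level not in RISK_LEVELS:
        raise ValueError(f"Unknown risk level: {risk_level!r}")
    return RISK_LEVELS[risk_level]

# Example: a customer-facing chatbot typically falls under the limited-risk category.
print(obligations_for("limited"))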

 

This regulation applies in several stages:

February 2, 2025

  • Application of the general provisions (purpose, scope, and definitions)
  • Obligation for providers and deployers to ensure a sufficient level of AI literacy within their organization
  • Ban on AI systems that present an unacceptable risk

August 2, 2025

  • Application of the rules for general-purpose AI models
  • Designation of the competent authorities at member state level

August 2, 2026

  • Application of the remaining provisions of the regulation, except those applicable from August 2, 2027
  • Establishment of regulatory sandboxes by member state authorities

August 2, 2027

  • Application of the rules for Annex I high-risk AI systems (toys, radio equipment, in vitro diagnostic medical devices, etc.)

 

The European Commission has published:

Failure to comply with the regulation exposes the offender to administrative fines whose amount depends on the nature of the infringement and may reach EUR 35,000,000 or, for companies, 7% of total worldwide annual turnover for the preceding financial year, whichever is higher.

In addition, regardless of their level of risk, AI systems may undermine:

  • the confidentiality of transmitted data,
  • the protection of personal data, and
  • intellectual property rights.

Providers and deployers, as well as users of AI systems, therefore need to be extremely vigilant.

