July 17th, 2024

After years of negotiation, the final text of the AI Act was published in the Official Journal of the European Union on July 12. While the purpose of the AI Act is to promote trustworthy AI systems that respect European values and fundamental rights, it has not been free of criticism: some stakeholders consider that it falls short in protecting individual rights, while others argue that it does not provide sufficient incentives for innovation.

Key points for organizations:

What is an AI system?
The AI Act adopts a broad concept of AI system, covering any machine-based system characterized by its autonomy, its adaptiveness, and its capacity to generate predictions, recommendations, decisions or other types of content.

To whom does the AI Act apply?
The AI Act will apply to persons or companies that develop, import, place on the market or use AI systems (or their output) in the European Union, regardless of whether those persons or companies are located inside or outside the EU. Nevertheless, the extent of their obligations will depend on their role in the value chain, with developers (“providers”) being subject to the most stringent requirements. Besides potential civil liability and reputational damage, companies that violate its provisions risk considerable fines (up to EUR 35 million or 7% of annual worldwide turnover, whichever is higher).

What are the obligations imposed by the AI Act?

The AI Act follows a risk-based approach, establishing diverse requirements depending on the category where the system falls:

  • Prohibited AI uses, such as subliminal, purposefully manipulative or deceptive techniques, social scoring, emotion recognition in the workplace or biometric categorization to infer sensitive characteristics of individuals.
  • High-risk AI systems: AI systems that are products, or are used as safety components of products, required to undergo a third-party conformity assessment before being placed on the EU market, such as medical devices and in vitro diagnostic medical devices. This category also covers AI systems used in specific contexts, such as HR, education, emergency services or law enforcement, as listed in Annex III.
  • Limited-risk AI systems: systems that, while not falling into the previous categories, interact directly with humans, manipulate image, video or sound, or generate synthetic content.
  • Minimal-risk AI systems: any other AI system not covered by the previous categories. These systems are not subject to any obligations under the AI Act.

High-risk systems must undergo a conformity assessment and comply with numerous requirements to ensure data quality and data governance, technical robustness and safety, transparency, non-discrimination and human oversight. In some cases, a fundamental rights impact assessment will also have to be performed. By contrast, limited-risk AI systems are mainly subject to transparency obligations. Additionally, the regulation includes specific provisions for general-purpose AI models, which may include generative AI systems such as ChatGPT.

When will the provisions of the AI Act enter into force?

The AI Act will enter into force 20 days after its publication in the Official Journal of the European Union, i.e. on August 2nd, 2024. From that date, its provisions will gradually become applicable over the following months.

What are the next steps?

Companies should not wait to start their compliance efforts. While the concrete operationalization steps for some of the obligations are not yet clear, others can already be implemented. Furthermore, some requirements (for instance, those related to documentation or appropriate data management) should be put in place now, as it will not be possible to comply with them retroactively.

Watch our LinkedIn Live of May 2024

In this replay video, recorded just before the official publication of the AI Act, we give a brief overview of the regulation and the key points that organizations must keep in mind, with a focus on the life sciences sector. Although the video predates the official publication, its content remains accurate. We present several use cases to illustrate the different requirements imposed by the AI Act depending on the type of AI system and the role the organization plays in the value chain.
Watch this video to understand how to operationalize these requirements in practice and to find out which steps must be taken to ensure a responsible and compliant use of AI within an organization.

Noelia Fernandez Freire

Data Protection Lawyer

If you want to know more about the AI Act and how it will affect your organization, contact us directly.

Contact us