The EU AI Act: Responsibilities for Providers and Users

As the EU AI Act continues to shape the landscape of artificial intelligence regulation, it is worth delving deeper into aspects not covered in the first article dedicated to it. This part clarifies the obligations of providers and users and the implications for General-Purpose AI (GPAI).

1. Providers vs. Users – Who Bears the Burden?

Understanding the distinction between providers (developers) and users (deployers) of AI systems is crucial for compliance with the EU AI Act. It is worth noting at the outset that, under Article 3, these terms do not refer only to people directly involved in developing or using AI: they cover natural persons as well as legal entities, public authorities, agencies and other bodies.

  • Providers (Developers):
    • Providers of high-risk AI systems bear the majority of obligations. These include preparing technical documentation (Article 11), operating a quality management system (Article 17), providing instructions for use (Article 13), and implementing risk management, cybersecurity measures and more.
    • Providers must also be aware that these obligations apply not only to EU-based companies but also to those outside the EU if their AI systems are used within the EU market.

All these obligations are listed in Article 16.

  • Users (Deployers):

The Act has not forgotten users' obligations either; a separate article lists them (Article 26).

  • Users of high-risk AI systems have fewer obligations compared to providers. However, they still need to ensure that the AI systems they deploy are compliant and maintain oversight.
  • Users should be proactive in understanding the capabilities and limitations of the AI systems they use, as they play a critical role in ensuring the ethical and responsible use of AI.

There are obligations for both parties regarding transparency. Article 50 of the EU AI Act establishes transparency obligations for providers and deployers of AI systems, requiring clear disclosure when users interact with AI, encounter synthetic content (including deep fakes), or are subjected to emotional or biometric analysis. These rules aim to ensure informed user awareness, with limited exceptions, and require technical measures to detect and label artificially generated or manipulated content.

2. General-purpose AI Model (GPAI)

The entire Chapter V is devoted to the general-purpose AI model (GPAI). The rise of general-purpose AI presents unique challenges and opportunities under the EU AI Act. GPAI models can be adapted for many different applications, making them versatile but also complex to regulate.

  • Obligations for GPAI Providers:
    • All providers of GPAI models must supply technical documentation and instructions for use so that downstream users can deploy these models safely and effectively.
    • Compliance with the Copyright Directive is essential, and providers must publish a summary of the content used to train their models.
    • For GPAI models that pose a systemic risk, additional obligations apply, including conducting evaluations, adversarial testing, and ensuring robust cybersecurity protections.
  • Free and Open-Source Licensed GPAI:
    • Providers of free and open-source licensed GPAI models have fewer obligations, which primarily focus on copyright compliance and publication of training data summaries. However, if these models present systemic risks, they must adhere to the same rigorous standards as closed models.

Conclusion

The EU AI Act is a comprehensive framework that addresses different aspects of AI regulation, including the obligations of providers and users. By understanding the responsibilities of different stakeholders and the implications for AI in general, organisations can better navigate the constantly shifting landscape of AI governance.

As the regulatory environment continues to develop, organisations must remain vigilant and proactive in their assessment of AI systems. Compliance with the EU AI Act not only mitigates risk but also fosters trust and positions an organisation as a leader in the ethical deployment of AI.
