How the EU AI Act is Shaping the Future of Cybersecurity and AI Governance

The EU AI Act, officially known as the Artificial Intelligence Act, is a landmark piece of legislation that aims to create a comprehensive framework for the development and use of artificial intelligence in the European Union. The first of its kind in the world, the Act establishes a risk-based classification system for AI systems, which has significant implications for businesses and organisations, particularly in the area of cybersecurity.

Risk Classification

The EU AI Act classifies AI systems into four categories based on their risk levels:

  1. Unacceptable Risk: AI systems that pose a threat to security or fundamental rights are prohibited. This includes systems such as social scoring and manipulative AI that can influence behaviour in a harmful way.
  2. High Risk: The majority of the Act’s provisions focus on high-risk AI systems, which are subject to strict regulation. These include AI applications used in critical infrastructure, biometric identification and cybersecurity tools. Companies that develop or deploy high-risk AI systems must comply with strict requirements, including risk management, transparency, and accountability.
  3. Limited (Specific Transparency) Risk: AI systems that fall into this category, such as chatbots and deepfakes, are subject to lighter transparency obligations. Developers and deployers must ensure that end-users are aware that they interact with AI, which promotes transparency and trust.
  4. Minimal Risk: This category includes the majority of AI applications currently available on the EU single market, such as AI-enabled video games and spam filters. These systems are largely unregulated, although this may change with the development of generative AI technologies.
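The four tiers above can be thought of as a simple decision structure: classify the system first, then apply the obligations attached to its tier. The sketch below illustrates that idea in Python; the example mappings are drawn from this article's own illustrations and are not an official or exhaustive legal classification.

```python
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, with a summary of the regulatory consequence."""
    UNACCEPTABLE = "prohibited"
    HIGH = "strict obligations (risk management, transparency, accountability)"
    LIMITED = "specific transparency obligations"
    MINIMAL = "largely unregulated"

# Illustrative mapping only, using the examples named in this article --
# a real classification requires legal analysis of the system's use case.
EXAMPLE_SYSTEMS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "biometric identification": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def triage(system_name: str) -> Optional[RiskTier]:
    """Look up a system's risk tier, or None if it is not in the example mapping."""
    return EXAMPLE_SYSTEMS.get(system_name)

if __name__ == "__main__":
    for name in EXAMPLE_SYSTEMS:
        tier = triage(name)
        print(f"{name}: {tier.name} -> {tier.value}")
```

The point of the sketch is the ordering: prohibition is checked first, and only systems that clear each stricter tier fall through to the lighter ones.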

Obligations for Providers and Users

The Act places the majority of obligations on the providers (developers) of high-risk AI systems. This includes companies that intend to place high-risk AI systems on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country. Providers must ensure compliance with a number of requirements, including:

  • Technical Documentation: Providers must maintain comprehensive documentation that provides details on the design, functionality, and risk management measures of the AI system.
  • Risk Management: Continuous risk assessment and mitigation strategies must be implemented to ensure the safety and reliability of the AI system.
  • Cybersecurity Protections: Providers must ensure that their AI systems are secure from cyber threats and vulnerabilities.

Users (deployers) of high-risk AI systems also have obligations, although these are less extensive than those of providers. This applies to users located in the EU and third-country users where the AI system’s output is used in the EU. Users must ensure that they deploy AI systems in compliance with the regulations and maintain control over the systems they use.

General Purpose AI (GPAI)

The Act also covers General Purpose AI (GPAI), which refers to AI models that can be adapted for various applications. Providers of GPAI models must adhere to specific obligations, including:

  • Technical Documentation and Instructions for Use: All GPAI model providers must provide clear documentation and user instructions to ensure safe and effective deployment.
  • Copyright Compliance: Providers must comply with the Copyright Directive and publish a summary of the content used to train their models.
  • Systemic Risk Evaluation: Providers of GPAI models that present a systemic risk – whether those models are open or closed – must conduct model evaluations and adversarial testing, track and report serious incidents, and ensure robust cybersecurity protections.

Free and open-license GPAI model providers have fewer obligations, which focus primarily on copyright compliance and the publication of training data summaries, unless they present a systemic risk.

Conclusion

The EU AI Act represents a significant step towards the establishment of a regulatory framework for artificial intelligence, particularly in the context of cybersecurity. By classifying AI systems according to risk and imposing obligations on both providers and users, the Act aims to ensure that AI technologies are developed and deployed responsibly and ethically. Organisations need to assess their AI systems and ensure compliance with the new regulations in order to build trust and maintain a competitive edge as the AI governance landscape evolves.
