
Artificial intelligence (AI) has taken the business landscape by storm. If AI’s ability to infer, adapt and act autonomously has already shaken up corporate operations as we know them, how might it shape — and threaten — the future? The EU AI Act risk categories attempt to answer that question.
AI’s aptitude for self-learning and its borderless nature challenge governments, businesses and regulators to contain it. In March 2024, the European Union approved a first-of-its-kind act to regulate AI systems according to the risks they pose. Understanding the risk classifications and whom they apply to is now a mandate for any company that wants to operate compliantly in the EU.
At its core, the EU AI Act takes a risk-based approach, categorising AI systems according to the harm they may cause. It passed in early 2024 and came into force in August 2024. Covered organisations must begin complying by February 2025, making late 2024 a high-stakes period for those who provide and deploy AI.
This applies to many different entities but is of particular concern for providers and deployers of AI systems.
For a succinct overview of the EU AI Act, download our cheat sheet.
The legislation classifies AI systems into four risk levels based on their intended purpose. Understanding what each level entails and which systems it applies to is essential to compliance:
- Unacceptable risk: Practices deemed a clear threat to people’s safety or rights, such as social scoring, which are prohibited outright
- High risk: Systems used in sensitive contexts like critical infrastructure, employment, education, essential services and law enforcement, which face the Act’s strictest requirements
- Limited risk: Systems subject to transparency obligations, such as chatbots that must disclose that users are interacting with AI
- Minimal risk: The majority of AI applications, which face no additional obligations under the Act
Tools like ChatGPT are among the best-known AI systems on the market. Which risk category applies to these tools, considered general-purpose AI (GPAI)?
The Act recognises that GPAI operates more broadly and isn’t designed for the specific purposes outlined in each risk category. That breadth makes these systems applicable to a wide range of use cases, but it also makes them hard to classify.
Instead of pinning them to a risk category, the Act emphasises that GPAI carries systemic risk, meaning that there are far-reaching risks inherent in the use of these tools. The Act mandates transparency requirements, risk management, reporting and surveillance related to GPAI. It is worth noting that GPAI providers could be subject to some of the same obligations as high-risk systems if their models are used in high-risk applications.
High-risk is the most heavily regulated of the EU AI Act risk categories. As such, providers and deployers specifically handling high-risk systems must understand and comply with the requirements of the Act, particularly those detailed in Chapter III, Section 2.
Download our guide for a deeper dive into EU AI Act compliance.
Deployers’ obligations under the Act are tied to the instructions of providers. The provider of an AI system must equip deployers with comprehensive instructions for using the system safely and responsibly. Deployers must adopt appropriate measures following those instructions — a burden general counsel can help navigate.
Any deployer that deviates from the provider’s instructions, whether through a new use case or a change to the system, can then be classified as a provider, which brings steeper regulatory requirements.
Under the Act, deployers of high-risk systems are responsible for obligations including:
- Using the system in accordance with the provider’s instructions
- Assigning human oversight to competent, properly trained personnel
- Ensuring that input data under their control is relevant to the system’s intended purpose
- Monitoring the system’s operation and suspending use if a serious risk emerges
- Retaining the logs the system automatically generates
- Informing the provider and relevant authorities of serious incidents
While using AI systems, deployers must still comply with existing EU and member state laws, like the GDPR. This includes completing a GDPR data protection impact assessment (DPIA) where required, recognising that AI can involve automated decision-making and high volumes of personal data.
Organisations have a relatively short runway to comply with the EU AI Act. But the tight timeline only underscores the importance of taking a proactive approach to protecting human health, safety and fundamental rights, one that keeps pace with AI’s rapid growth.
However, complying with the EU AI Act quickly doesn’t mean cutting corners. Deployers, providers and other entities operating in the EU can both stay ahead and stay compliant — if they have the right toolkit.
"Perhaps the greatest business risk around AI right now is the risk of doing nothing. You can decide how you'd like to approach it, but what I think every company needs to do is have a considered approach..." — Dale Waterman, Principal Solution Designer, Diligent
Tap into Diligent’s toolkit curated specifically for the EU AI Act to ease your compliance burden now and for the future.
Stay ahead of AI regulations with the Diligent One Platform Education & Templates Library. Upgrade your skills with expert-researched, tailored learning tracks in one easy-to-navigate hub.