The EU has taken a landmark step in regulating AI through the EU Artificial Intelligence Act (EU AI Act, hereafter “Act”), establishing a comprehensive legal framework for AI across the EU. The Act introduces a risk-based classification system for AI technologies and imposes obligations on developers, deployers, and distributors of AI systems entering the EU market. For tech businesses, especially those innovating in AI, understanding the implications of this regulation is essential, not only for compliance but also for safeguarding intellectual property. Whilst different parts of the Act come into operation in stages until 2 August 2027, the first set of provisions, prohibiting certain AI systems deemed to present an unacceptable risk, came into force on 2 February 2025. In this article, we summarise that first set of provisions and look at how the Act interacts with, and differs from, patent law under the European Patent Convention (EPC).

Risk categorisation of AI systems under the EU AI Act

The Act is designed to ensure that AI systems used within the EU are safe, trustworthy, transparent, and respect fundamental rights. It applies to any provider or deployer of AI systems whose output is used in the EU, regardless of where the company is based. The Act applies a risk-based approach, so that regulatory obligations are matched to the potential harm posed by the AI system. It classifies AI systems into four risk categories, summarised below with a short illustrative sketch after the list:

  1. Unacceptable risk – These AI systems are banned outright, e.g. social scoring systems that rate individuals based on behaviour or personal traits, biometric categorisation systems that infer sensitive characteristics, and AI systems that manipulate human behaviour.
  2. High risk – These are AI systems that pose a serious risk to health, safety or fundamental rights of persons (e.g. in critical infrastructure, education, recruitment, medical devices). These systems are permitted but subject to strict compliance requirements, including conformity assessments, technical documentation, and CE marking.
  3. Limited risk – These are AI systems that pose less serious risks but still interact with users in a way that may affect transparency, fairness or trust, for example chatbots, deepfake generators, and other interactive AI systems. These systems must meet transparency obligations, such as informing users that they are interacting with AI.
  4. Minimal risk – These systems face no specific obligations under the Act. The majority of current AI applications fall here e.g. basic spam filters and standard image-enhancement tools.
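For readers who prefer a concrete model, the four-tier scheme can be sketched as a simple lookup. This is purely illustrative, not legal text: the tier names follow the Act, but the one-line obligation summaries and all identifiers (RiskTier, OBLIGATIONS, obligations_for) are our own shorthand.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act (illustrative model only)."""
    UNACCEPTABLE = "unacceptable"  # banned outright (Article 5 prohibitions)
    HIGH = "high"                  # permitted, subject to strict compliance
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations under the Act

# Rough paraphrase of the headline obligation per tier, not the Act's wording.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited - may not be placed on the EU market",
    RiskTier.HIGH: "conformity assessment, technical documentation, CE marking",
    RiskTier.LIMITED: "inform users that they are interacting with AI",
    RiskTier.MINIMAL: "no specific obligations",
}

def obligations_for(tier: RiskTier) -> str:
    """Return a one-line summary of the obligations for a given tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
# conformity assessment, technical documentation, CE marking
```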

High-risk AI systems – obligations

Under Article 6(1) of the EU AI Act, an AI system is considered to be high-risk where both of the following conditions are fulfilled:

a) “the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;”

b) “the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.”

In addition, under Article 6(2), AI systems falling within the use cases listed in Annex III of the Act, such as biometrics, education, employment and access to essential services, are also considered high-risk.
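Purely as an illustration of how the two limbs of Article 6(1) combine, the test is a simple logical AND. The function name and its boolean parameters below are hypothetical; in practice each limb requires legal analysis against Annex I.

```python
def is_high_risk_under_article_6_1(
    is_safety_component_or_annex_i_product: bool,
    requires_third_party_conformity_assessment: bool,
) -> bool:
    """Article 6(1): high-risk only if BOTH conditions are fulfilled."""
    return (
        is_safety_component_or_annex_i_product
        and requires_third_party_conformity_assessment
    )

# Example: an AI safety component of a medical device covered by Annex I
# legislation that must undergo third-party conformity assessment satisfies
# both limbs; one limb alone is not enough.
assert is_high_risk_under_article_6_1(True, True) is True
assert is_high_risk_under_article_6_1(True, False) is False
```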

The majority of obligations fall on providers and developers of high-risk AI systems. These obligations include establishing a risk management system and implementing data governance measures to ensure that training, validation and testing datasets are relevant, sufficiently representative and, to the best extent possible, free of errors. Providers must also draw up technical documentation demonstrating compliance and give authorities the information they need to assess that compliance.

Although public disclosure of a high-risk AI system’s technical specifications is not compulsory, providers may be required to make such details available to regulators on request. For AI-driven innovations, this obligation can pose a risk of revealing proprietary information, making IP protection an important strategic concern.

The EU AI Act vs. the European Patent Convention (EPC)

It’s important to recognise that the EU AI Act and the EPC are distinct legal instruments serving different purposes. The Act is a regulatory framework focused on the safe deployment and use of AI systems within the EU. The EPC, administered by the European Patent Office (EPO), governs the grant of patents for inventions in all fields of technology, including AI.

While the Act imposes compliance obligations, it does not determine whether an AI system is patentable. Patentability under the EPC depends on whether the invention meets the criteria of novelty, inventive step, and industrial applicability. The EPO has made it clear that inventions involving AI may be patentable when the AI contributes to a technical effect and solves a technical problem.

For tech businesses, compliance with the Act is a market access issue whereas patenting under the EPC is a proprietary rights issue. The Act and the EPC sit side-by-side rather than one deriving from the other; the EPC’s patentability criteria remain unaffected by the Act.

This means that the classification of an AI system as “high-risk” under the Act does not automatically preclude it from being considered a patentable invention. For example, an AI system used in biometric identification or critical infrastructure management may be deemed high-risk under the Act, but if it demonstrates a technical effect (e.g. improved accuracy or efficiency), it may be patentable under the EPC.

Strategic considerations for tech businesses

The intersection of regulatory compliance and patent strategy introduces new challenges and considerations:

  • Early classification check: At the development stage, assess which risk category an AI system is likely to fall into under the EU AI Act. If it may be high-risk, plan early for the corresponding compliance obligations (data governance, transparency, monitoring).
  • Disclosure and documentation: Although the Act imposes documentation obligations for high-risk systems, these are separate from the disclosure requirements of patent applications, which must describe the invention in a manner sufficiently clear and complete for it to be carried out by a person skilled in the art. It is worth considering how regulatory documentation prepared for the Act can complement or support a patent filing (e.g. data sets, model architectures, test results).
  • Timing: Patent filings should ideally precede or coincide with regulatory submissions to avoid unintended public disclosures. If disclosures occur before a patent application is filed, they may jeopardise novelty and patentability.
  • Regulatory compliance vs. patent strategy: If an AI system is high-risk, compliance with the Act will add costs and may extend timelines. From a business planning perspective, both the regulatory and patenting aspects should be factored into the commercial strategy: e.g. market launch may depend on regulatory clearance, while patent filing should be timed to preserve novelty and inventive step.
  • Commercialisation: While a granted patent may provide exclusive rights, if the AI system cannot be placed on the EU market because of non-compliance with the Act, the business value may be impaired. Conversely, compliance with the Act does not give rise to patent rights, so it is important to ensure the invention is protected via the EPC if appropriate.

Conclusion

The EU AI Act marks a major milestone in the regulation of AI systems in the EU. For tech businesses, the key message is that regulatory compliance via the Act and the protection of inventions via the EPC are distinct but complementary. A system classified as high-risk under the Act does not become unpatentable per se under the EPC.

The dual regulatory and IP landscape means that businesses should plan holistically. Early strategic alignment of your AI-system development, patent-filing timeline and regulatory roadmap will give you the best chance to secure both market access and proprietary rights.

If you would like to discuss the implications of the EU AI Act, or have any queries on patent protection in the field of AI, please contact us at gje@gje.com.