The European Union’s Artificial Intelligence Act (EU AI Act) introduces a comprehensive regulatory framework for AI systems, with potentially significant implications for those looking to commercialise AI in the digital health sector. In this article, we summarise the relevant provisions of the act and discuss how these highlight the importance of IP protection for innovations in this space.

Introduction

The EU AI Act is a legal framework designed to harmonise the treatment of AI across EU member states. It categorises AI applications by risk level and places regulatory obligations on the providers of these systems according to the associated risk level. Different sections of the act come into operation in stages until 1 August 2027; the first set of provisions, prohibiting certain AI systems deemed to present an unacceptable risk, came into force on 2 February 2025.

Article 3(1) of the EU AI Act defines an AI system as:

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

This broad definition encompasses a wide range of software-based AI applications including core areas of digital health innovation.

High-risk AI systems: what counts?

Under the AI Act, different provisions apply depending on the deemed risk level of a particular AI system. A large proportion of healthcare-related AI systems fall into the “high-risk” category according to Article 6 of the act. Under Article 6(1), a system is considered high risk if:

(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;

(b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.

For digital health innovations, the most relevant legislation listed in Annex I of the AI Act includes:

  • Regulation (EU) 2017/745 on medical devices
  • Regulation (EU) 2017/746 on in vitro diagnostic (IVD) medical devices

Regulation (EU) 2017/745 provides a broad definition of a medical device which explicitly covers software intended for medical purposes such as diagnosis, monitoring, treatment and several other applications. Under Rule 11 of this regulation, software that provides information used to take decisions with diagnostic or therapeutic purposes, and software that monitors physiological processes, are both classified in class IIa, IIb or III, with the specific class depending on the severity of the health impact of the decisions the software supports. As discussed below, this classification affects how certain elements of the AI Act apply.

Beyond medical devices falling under the EU regulations above, Annex III of the AI Act identifies other potential high-risk systems. Relevant systems related to digital health include:

  • Emergency healthcare patient triage systems
  • AI systems for biometric categorisation and emotion recognition

Why does a high-risk classification matter?

According to the AI Act, high-risk AI systems must meet stringent documentation and transparency requirements:

  • Article 11 of the act sets out that detailed technical documentation relating to a high-risk AI system must be drawn up before the system is placed on the market and kept up to date to demonstrate compliance.
  • Article 13 requires that high-risk AI systems be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately.
  • Annex IV of the act specifies that the technical documentation must include, among other things, a detailed description of the elements of the AI system and of the process for its development.

While public disclosure of the technical details of an AI system is not mandated, many developers must be prepared to share technical details of their innovations with regulators and third-party assessors. For example, AI systems that are categorised as class IIa, IIb or III medical devices, that are IVD devices other than class A, or that fall under Annex III of the act must undergo a conformity assessment by a third party. For many AI-based digital health innovations, this requirement creates a potential risk of exposing proprietary information, making IP protection a strategic priority.

Why IP protection matters

For developers of many AI-driven digital health solutions, the EU AI Act increases the likelihood that they will be required to disclose technical details of their innovations to third parties. Whilst confidentiality agreements can mitigate this risk, for many digital health innovations a patent application can be a strong alternative means of IP protection.

Some AI innovators may previously have been hesitant to disclose the finer details of their inventions publicly. However, given that the AI Act already requires technical details to be disclosed to third parties, a patent application becomes a more attractive form of IP protection: in exchange for making a detailed disclosure of the invention in a patent application, an innovator may secure a powerful form of protection.

Although not all AI innovations are suitable for patent protection, innovations directed to medical and health-related tasks are often eligible. It is therefore advisable to seek advice from an IP professional as to whether patent protection is the right option for extracting the greatest value from an innovation.

If you would like to discuss the implications of the EU AI Act, or have any queries on patent protection in the field of AI or digital health, please email us: gje@gje.com.