With the increasing availability of large language models (LLMs), it is not surprising that parties in proceedings before the European Patent Office (EPO) would begin referring to answers from chatbots such as ChatGPT in support of their arguments. However, to what extent can AI‑generated content be used as “evidence” at the EPO?
This article looks at emerging themes from recent decisions of the Boards of Appeal to answer this question.
Chatbots do not represent the skilled person
The question of whether AI-generated content can provide evidence of the skilled person’s understanding was addressed in decisions T 0206/22 and T 1193/23.
One of the parties in each case referred to answers provided by ChatGPT as evidence of how certain claim features should be interpreted. However, the Boards dismissed ChatGPT’s replies as being irrelevant on the basis that interpretation is a matter for the “skilled person”. The Boards commented that a chatbot is not equivalent to a person skilled in the art, who is a specialist in a well‑defined field of technology.
Furthermore, the Boards identified what they saw as several critical flaws in the use of AI chatbots for this purpose:
- Answers do not reflect knowledge at the priority date: A chatbot's answer is in principle based on all the information available to it, which may include documents published well after the priority date.
- Training data is unknown: The answer from a chatbot is based on the data it was trained on, which is unknown to the user.
- Sensitive to the prompt: The answer from a chatbot can depend on the context and precise formulation of the questions asked.
In view of these limitations, the Boards decided that it cannot be assumed that an answer from a chatbot correctly reflects the skilled person’s understanding at the priority date in the technical field of the patent in question.
AI-generated content may be wrong
In decision T 0535/21, the Board was presented with an extract from a dialogue with Microsoft Copilot, in particular its answer to the question “Does a microcontroller provide a higher level of security than a microprocessor or are they the same in terms of safety?”.
The Board noted that the statements made by the chatbot may well be correct, but that the chatbot's answer is not in itself suitable evidence of those statements, which would instead need to be verified through independent sources. In this regard, the Board was keen to point out that even the chatbot itself warned that “AI‑generated statements […] can be wrong”.
Conclusion
While the Boards of Appeal have not ruled out the possibility of LLMs finding a place within proceedings before the EPO, it is apparent that the current limitations of AI systems present significant obstacles to their use for this purpose. In particular, the inherent black-box nature of AI and the possibility of it providing incorrect information limit its use as a credible source of evidence.
LLMs can be a very useful research tool, but practitioners before the EPO will still need to provide independent reference sources in support of their arguments, at least for now.
For further discussion or advice on the evidential value of AI-generated content at the EPO, please contact us at gje@gje.com