Artificial intelligence is reshaping how people invent. Large Language Models (LLMs) can now assist with brainstorming, drafting technical descriptions, and even proposing mechanisms or materials to use in an invention. Yet while these tools can accelerate creativity, they can also introduce risks when an inventor later seeks to secure intellectual property rights for the resulting invention. This article explores those risks and sets out practical steps to manage them.

Disclosure

Inputting details of an invention into a public or cloud-based AI tool prior to filing a patent application may constitute a public disclosure, especially if those details are then used to further train the model.

For example, an inventor may enter details of their novel vehicle braking system into a public LLM to find out which materials would be best suited to their invention. Later, a different user of the same LLM may ask: “what are the latest developments in vehicle braking technology?” In response, the LLM may disclose the details of the inventor’s braking system, having previously learned this information from the inventor’s prompts. In most jurisdictions, such a publicly available disclosure would be citable against the inventor’s patent application for the braking system, which could prevent grant of the patent.

Even if the public LLM does not subsequently disclose the invention details to another user, those details are likely to be stored on remote servers owned and maintained by the organisation that developed the LLM. Depending on that organisation’s data protection policies, the details may be reviewed by system administrators (during diagnostics or system updates, for example), or may be sold on to other companies (for marketing and advertising purposes).

To reduce the risk of publicly disclosing invention details, it is best practice for inventors to refrain from sharing those details with online AI systems. Where AI assistance is necessary (to speed up production ahead of a scheduled launch, for example), it is safer to use a secure, locally hosted model governed by strict data protection policies, ensuring complete data isolation and preventing third-party access.
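
By way of illustration only, the short Python sketch below shows one way a technically minded inventor might query a locally hosted, open-source model using the Hugging Face transformers library, so that prompts containing invention details are processed on the inventor’s own hardware rather than sent to a third-party service. The model name and prompt are illustrative assumptions, not recommendations.

    # Illustrative sketch only: querying a locally hosted open-source model
    # so that prompts containing invention details are processed on the
    # inventor's own machine. Model and prompt are hypothetical choices.
    from transformers import pipeline

    # Once the model weights have been downloaded, text generation runs
    # entirely on local hardware; the prompt itself is not transmitted
    # to any third-party server.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "Suggest materials with a high coefficient of friction for brake components:"
    output = generator(prompt, max_new_tokens=80)

    print(output[0]["generated_text"])

Commercial providers may also offer enterprise deployments with contractual commitments that prompts are not retained or used for training; which route is appropriate will depend on the sensitivity of the invention.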

Accuracy

LLMs can generate confident yet incorrect statements, known as “hallucinations”. This is because LLMs predict plausible text patterns rather than evaluating scientific facts. As a result, an LLM might propose, for example, a chemical reaction that cannot occur, or a mechanical linkage that cannot produce the desired motion. The inclusion of such hallucinations in a patent application can significantly reduce the chances of securing a granted patent.

In order for a patent to be granted, the underlying patent application must provide a sufficient disclosure of the invention to be protected. This means that the application must contain enough information to enable a person skilled in the relevant technical field to reproduce the invention. For example, a patent application for the novel vehicle braking system should provide enough detail to enable an automotive engineer to build it. If the application contains unworkable examples, technical impossibilities, or contradictions arising from hallucinations, it may fail to meet this requirement, and the patent may be refused.

To manage this risk, all AI-generated technical material should be verified before use. The inventor should review any data, calculations, or implementation details proposed by an LLM to confirm that they are technically sound. If the LLM’s suggestions fall outside the inventor’s speciality, additional human expertise should be sought (in confidence, to avoid the disclosure risks discussed above).

Freedom to operate

LLMs are trained on vast datasets that may include descriptions of patented products. As a result, an LLM may suggest incorporating a patented product into the invention to achieve optimum results. For example, it may suggest using a patented alloy in the brake shoes to improve the performance of the inventor’s novel braking system. Moreover, the inventor may not even realise the alloy is patented, given that LLMs rarely cite the sources of their output.

While this does not prevent the inventor from obtaining a granted patent for the braking system (patentability and freedom to operate are separate questions), building and selling the braking system with the patented alloy would infringe the patent for the alloy. This could lead to costly legal disputes with the owner of the alloy patent.

For this reason, AI-generated content should serve only as reference material. Inventors should conduct freedom-to-operate analyses on any components suggested by an LLM before incorporating them into the final product.

Conclusion

LLMs should be treated as assistive tools, not substitutes for scientific or legal expertise. While AI can accelerate idea generation, relying on it blindly may expose inventors to the disclosure, sufficiency, and infringement risks outlined above. Inventors can nevertheless benefit from LLMs safely by ensuring that human expertise plays a central role in validating the accuracy and origin of an LLM’s outputs, and that invention details are never entered into systems that may retain or disclose them.

If you would like to discuss how you can use AI safely as part of the inventive process, please contact your usual GJE attorney, or email us: gje@gje.com.