Introduction

The impact of Artificial Intelligence (AI) on intellectual property (IP) law undoubtedly ranks as one of the most-discussed topics of 2020 among legal academics and practitioners. Following initiatives at WIPO, the EPO and several national IPOs (including the UKIPO and the USPTO), EU institutions have now also become active in this area. On 20 October 2020, the European Parliament adopted a resolution on IP rights for the development of AI technologies. In parallel, on 25 November 2020, the European Commission published a commissioned study on challenges posed by AI to the European IP rights framework.

The study, which was carried out by researchers at the Institute for Information Law (IViR) [the authors of this post] and the Joint Institute for Innovation Policy (JIIP), examines the state of the art of copyright and patent protection in Europe for AI-assisted outputs in general and in three priority domains: science (in particular meteorology), media (journalism), and pharmaceutical research. The term “AI-assisted outputs” is used in the study to refer to productions or applications generated by or with the assistance of AI systems, tools or techniques. This post focuses on the patent law analysis of the study (for an overview of the study and of the copyright part, see here and here).

The use of AI systems in the realms of culture, innovation and science has grown spectacularly in recent years and is likely to continue to do so. As noted by Drexl et al., AI applications relevant for patent law might include, for example, technology for “the functioning of a self-driving car, optimisation of a car design, development of medical treatments, virtual assistants”. Although AI systems have become increasingly sophisticated and autonomous, our study assumes that fully autonomous creation or invention by AI does not yet exist, nor will it exist for the foreseeable future. The study therefore views AI systems primarily as tools in the hands of human operators.

AI and European Patent Law

In respect of European patent law, our analysis focuses on the European Patent Convention (EPC), looking into a number of issues related to AI-assisted outputs: inventorship, ownership, novelty assessment, inventive step, sufficiency of disclosure, and the case study of drug discovery.

As our study demonstrates, the requirement that an inventor be named on a patent application means that one or several human inventors must be identified. Under the EPC regime, this is essentially a formal requirement. The EPO does not resolve disputes regarding substantive entitlement, which is an issue that is governed by national law. Following this approach, the EPO decided two cases in 2020 (currently under appeal) where it considered that, because AI systems do not have legal personality, they cannot be named inventors on a patent application.

A human inventor typically has the right to be named on the application. Beyond this, inventorship and co-ownership are mostly a matter for national law. It should be noted, however, that as AI technology stands today, the possibility that an AI system would invent in a way that is not causally related to one or more human inventors (e.g. the programmer, the trainer, the user, or a combination thereof) seems remote. Accordingly, no immediate action appears to be required on the issue of inventorship at EPC level.

As regards patent ownership, there are at least three possible (sets of) claimants to an AI-assisted invention: the programmer or developer of the AI system; the owner of the system; and the authorised user of the system (who provided it with training data or otherwise supervised its training). Neither international law nor the EPC provides clear rules on how ownership of patents may be affected by this new type of AI-assisted inventive activity. Ownership is therefore a matter for national law. Harmonisation may nevertheless not be required, as there does not seem to be a problem in establishing a sufficient connection between an AI-assisted invention and a patent applicant.

The granting of a patent requires that, as of the date of filing, the invention must be new (novel) and involve an inventive step. While the increasing use of AI systems for inventive purposes does not require material changes to these core concepts, it may have practical consequences for patent offices. AI systems enable qualitatively or quantitatively different novelty (prior art) searches, and the practical assessment of inventive step may shift, as certain claimed inventions may become “obvious” to the person skilled in the art precisely because of the increasing use of AI systems. Any future changes are likely to emerge from decisions at European (EPO Boards of Appeal) or national level in which patents are either upheld or not.

A patent application must also sufficiently disclose the invention. The “black box” nature of some AI systems may present challenges to this requirement. In that regard, it has been suggested that a mechanism to deposit AI algorithms be established, akin to that for microorganisms (the Budapest Treaty). Although it is as yet unclear whether a deposit system for AI algorithms would be useful, it seems advisable to at least consider the possibility of requiring applicants to provide this type of information, while maintaining sufficient safeguards to protect all confidential information to the extent it is required under EU or international rules.

Finally, inventions that might otherwise be patentable might be protectable as trade secrets under the 2016 Trade Secrets Directive, a topic for future research that is outside the scope of our study.

Conclusions and Recommendations

In light of the above, our study reaches the following conclusions and recommendations regarding European Patent Law, and in particular the EPC:

  • The EPC is suitable to address the challenges posed by AI technologies in the context of AI-assisted inventions or outputs.
  • When assessing novelty, national IPOs and the EPO should consider investing in maintaining a level of AI capability that matches the technology available to sophisticated patent applicants.
  • When assessing inventive step, it may be advisable to update EPO Examination Guidelines to adjust the definition of the “person skilled in the art” and secondary indicia to track developments in AI-assisted inventions or outputs.
  • When assessing sufficiency of disclosure, it would be useful to study the feasibility and usefulness of a deposit system for AI algorithms and/or training data and models that would require applicants in appropriate cases to provide information that is relevant to meet this legal requirement.
  • For the remaining potential challenges identified, it may be good policy to wait for cases to emerge, in particular before national courts, in order to identify the actual issues, if any, that require a regulatory response.
  • Further study of the role of alternative IP regimes to protect AI-assisted outputs, such as trade secret protection, unfair competition and contract law, should be encouraged.

Final Remarks

In sum, the study concludes that the current state of the art in AI does not require or justify immediate substantive changes in patent law in Europe. The existing concepts of patent law are sufficiently abstract and flexible to meet the current challenges from AI. Producers of AI-assisted outputs also have access to alternative regimes, such as trade secret protection, unfair competition and contract law.

The main conclusions of the IViR/JIIP study were adopted by the European Commission in the IP Action Plan that was submitted to the European Parliament and the Council on the same day the study was published, 25 November 2020.

Part of this blog post is adapted from a previous blog post on the IPKat Blog (here) and the Kluwer Copyright Blog (here).



11 comments

  1. “the possibility of requiring applicants to provide this type of information, while maintaining sufficient safeguards to protect all confidential information to the extent it is required under EU or international rules.”

    Hmmm, I did not know there was a choice. As far as I understood, you disclose your invention and after 18 months it will be published. If you keep trade secrets and the disclosure is insufficient, you won’t get a valid patent.
    To me this means that the algorithm and the training data have to be disclosed, otherwise others cannot reproduce the invention. It sounds to me like AI proponents have a new piece of cakeism…

    1. Thanks for the comment. In the report itself (pages 111-114) we discuss disclosure in greater detail. On page 112, for example, we explain that the “black box” nature of certain AI systems may make it challenging to provide a sufficiently clear and complete disclosure for the invention to be carried out by a POSITA. This may be more pronounced for process claims, but it applies to any claimed invention, whether product or process, in which an AI system, or one of its outputs, is part of the claimed invention. An AI system may help identify an innovation (or even just a lead) but may not be able to explain why it works, or how it is making its contribution. Disclosing whether an invention was conceived with the aid of AI may be required where this is necessary to meet the patentability criteria. In other words, the principle remains unchanged: the patent application must disclose enough for replicability by a POSITA, but no more than is the case for non-AI-assisted inventions. Disclosure or deposit of the algorithm, and possibly also of a description of the way in which the AI that assisted in the inventive process was trained (including a reference to the training data and its main characteristics, and the training technique and method used), may be required in appropriate cases. The report also discusses a possible “deposit” requirement for the algorithm and training data along these lines.

  2. In my opinion, the only real issue is sufficient disclosure. Maybe patent offices need to update their filing systems to allow training data to be uploaded together with the usual filing of the patent application. The data could then be used, together with the disclosure in the application, to implement the invention. If patent offices are clever, they will charge the applicant a supplemental fee per megabyte of training data, in order to create an incentive to keep training data sets manageable but sufficient.

  3. To support Fragender, I would like to draw attention to T 161/18.

    https://www.epo.org/law-practice/case-law-appeals/recent/t180161du1.html

    The application relates to determining the cardiac output. In its catchword the BA said:
    “The present invention, which is based on machine learning, in particular in connection with an artificial neural network, is not sufficiently disclosed, since the training of the artificial neural network according to the invention is not implementable for lack of disclosure.” So this is for Art 83 EPC.

    Now for Art 56: “Since in the present case the claimed method differs from the prior art only by an artificial neural network, the training of which is not disclosed in detail, the use of the artificial neural network does not lead to a special technical effect which could establish inventive step.”

    In a nutshell, we are faced with one of the biggest problems of AI and neural networks: disclosing not only the algorithm, but also the training data.

    Whilst it is clear that AI can help out with repetitive actions, like for instance checking US, MRI or X-ray pictures in order to detect abnormal tissues, the training data will be very valuable. But once published, they will most probably become worthless. I therefore do not expect a lot of patent applications in this new area of technology.

    As far as the creation of a sui generis right for AI algorithms is concerned, we have the experience of the “Washington Treaty on Intellectual Property in Respect of Integrated Circuits” of 1989. A lot of effort was put in at the time, and that treaty has still not come into force, as only Bosnia and Herzegovina, Egypt and Saint Lucia have acceded to or ratified it.

    Why do we need a separate deposit for AI? The algorithm as such is not worth a lot in the absence of the training data. And are the creators of AI applications ready to communicate all those data?

    As there is a lot of state money to be had, AI has become a buzzword. It is a wonderful playground for legal scholars, as many different aspects are touched upon, but will it be the big thing of the future? We have seen the big data and biotech bubbles deflate quicker than they inflated, and I would not be surprised if the same happened with AI.

    Let legal scholars have their playground, but spare us messages which have little to do with real life.

    As far as the two applications before the EPO are concerned, should DABUS be accepted as inventor, one is probably not inventive (the food container) and the other not sufficiently disclosed (devices and methods for attracting enhanced attention, i.e. a flashing light).

    1. Dear Attentive Observer,

      I concur with your comments. In an article I published in epi information 4/2020 to signal the importance of decision T 161/18, I quote AI experts who depict training data as “the lifeblood of AI” or even “the Achilles heel of AI”. With decision T 161/18, it can be concluded that training data are the Achilles heel of inventions using AI.
      It is of note that the Board in T 161/18 raised the Article 83 requirement of its own motion (it had not been raised by the Examining Division).
      The disclosure requirement of Article 83 regarding AI training data is indeed a formidable challenge. I share your scepticism about deposits. They would be another source of complexity, costs and legal issues, and they could only capture datasets at the date of filing or priority, while data sources are constantly updated. The idea I float in this article is to explore whether disclosing the methodologies for selecting data sources and for processing the selected data, so that they are relevant inputs to the neural network for achieving the desired effect, could satisfy the requirement without disclosing the training data as such.

  4. Agreed. What I like about the current debate is that AI is serving to “stress test” the patentability provisions of the EPC, written in 1973 before there was any CII sector or biotech sector. Can the EPC see off the current furore about the patentability of AI? I think it can. Did those who wrote the EPC get anything much wrong? It seems not. Hooray!

    And what about the Established Case Law of the EPC? Not much wrong with that either, is there? So far, at least.

    1. Max Drei,
      I agree that AI can be viewed as a “stress test” for the EPC and the EPO, but decision T 161/18 is just the kickoff. This decision sounds the alert on a problem, primarily the scant attention paid by examining divisions to sufficiency in AI cases; there is now a need for further insights from future BOA jurisprudence and from the Guidelines to provide guidance to examiners and practitioners as to how to deal with EPC requirements in AI cases.
      Another point of interest in decision T 161/18 is its procedural aspect, i.e. the Board acting ex officio, as it is entitled to do in ex parte proceedings.
      On a different issue, the statement in the post that “the requirement that an inventor be named on a patent application means that one or several human inventors must be identified. Under the EPC regime, this is essentially a formal requirement.” is debatable. Qualifying the requirement as “formal” risks playing down its policy significance. If an applicant could designate an AI as inventor, it would allow them to deprive the human inventors of their rights on the pretence that the invention was made by an AI.

  5. It is interesting to note that the fundamentals of modern patent law were defined well before the EPC was drafted.

    They go back to the Strasbourg Convention on the Unification of Certain Points of Substantive Law on Patents for Invention signed in 1963 by the members of the Council of Europe.

    The Strasbourg Convention has had a significant impact on the EPC, on national patent laws across Europe, on the Patent Cooperation Treaty (PCT), on the Patent Law Treaty (PLT) and on the WTO’s TRIPS Agreement. The notions of novelty and inventive step as we know them today were defined then.

    https://www.coe.int/en/web/conventions/full-list/-conventions/rms/090000168006b65d

    Since then we have had biotech inventions, the Internet, CII, you name it, but the basics stood and are still valid. AI is just a sub-group of CII and should not be dealt with differently from any other invention.

    As far as training data are concerned, one could imagine a standardised presentation, like the one that exists for gene sequences, so that they can be searched. This implies that they are made public, but in view of their intrinsic value, I have some doubts about this.

    I can agree that “if the application discloses the methodologies for the selection of data sources and processing of data which are specifically adapted to enable the skilled person to prepare training data relevant to the objective”, this could be a possible way to avoid disclosing the training data as such, but this approach is, in my opinion, a bit too theoretical.

    In the case of CII, it has never been required to file or disclose the source code, for the simple reason that it is in general impossible to make head or tail of such a disclosure, and it is also impossible to carry out a meaningful search on this basis. This was the prime reason to exclude programs as such from patentability. That is why the inventive idea, even in the case of CII, has to be given in plain, openly understandable language. Why should it be different for AI training data?

  6. Good contributions from Attentive and F. Hagel. The discussion of training data set me thinking about the development of the case law under Art 83 of the EPC on the issue whether “one way is enough”. Simple in theory, but quite hard to apply the principles to a real case with its own unique matrix of facts.

    Americans (and many UK lawyers) assert that you can’t have legal certainty without Binding Precedent. 30 years ago, I thought so too. But the last 30 years as a European Patent Attorney have brought me round to a different opinion. Today I think that the Darwinian evolution of the law within the Boards of Appeal of the EPO gives us, in evolutionary time, a better outcome, namely the survival of the fittest and most elegant working lines of legal logic.

    Somebody once pointed out that it takes more courage to change your mind than to hold fast to your established opinion. Three cheers for legal economy at the Enlarged Board of Appeal: decide the question that cannot be ducked but refrain from answering any more than necessary.

  7. Dear Max Drei,

    I can agree with you, but I must express some reservation about the recent evolution of the Enlarged Board of Appeal.

    I am far from convinced that in G 3/19 the EBA “decide[d] the question that cannot be ducked [and] refrain[ed] from answering any more than necessary”. Rewriting the questions and introducing a “dynamic interpretation” of its own case law in order to please the management of the EPO and the Administrative Council does not correspond to what one can expect from the EBA.

    In G 3/08, the EBA resisted the pressure of the then head of the EPO, but at the same time brought clarification in CII matters.

    Oral proceedings in G 1/19 took place in July 2020 and we are still waiting for a decision. It will be interesting to see what the evolution of the EBA in CII matters might be. Let’s hope that the EBA has recovered its independence.
