The greatest challenge for IP in the area of artificial intelligence (AI) will be to achieve some level of harmonisation worldwide. So says IP expert Adam Liberman in the second part of a series of interviews with authors and editors of Kluwer IP Law. Liberman is Director, IP Advantage at Deloitte and adjunct professor at the University of New South Wales. He has over forty years’ experience in advising on intellectual property, licensing, commercial and corporate matters.
To begin with, could you briefly explain what “artificial intelligence” or AI is?
Firstly, one has to understand that the topic of “artificial intelligence” is an extremely complex one. That complexity brings with it numerous views on how “artificial intelligence” is or should be defined or whether it should be defined at all. Some examples illustrate the point:
“an algorithm or machine capable of completing tasks that would otherwise require cognition” (Abbott)
“a machine that behaves in ways that would be called intelligent if a human was so behaving” (McCarthy)
“AI can be understood as computer functionality that mimics cognitive functions associated with the human mind (eg the ability to learn)” (Response from various parties including IBM in the USPTO, Public Views on Artificial Intelligence and Intellectual Property Policy October 2020)
“AI is a collection of technologies that combine data, algorithms and computing power” (European Commission White Paper on Artificial Intelligence, Brussels 19.2.2020)
“Undue effort should not be expended on defining AI, which is dynamic and will be subject to fundamental change in the coming years” (Response from various parties including Ericsson in the USPTO, Public Views on Artificial Intelligence and Intellectual Property Policy October 2020)
Given that exploring the definition of AI may not be of practical assistance in discussing AI in the context of patents, it is easier to get to the heart of the problems that arise in the patent context by considering whether AI merely assists in generating an invention, and as such is a tool in the inventive process, or whether AI can autonomously generate an invention, and as such can be viewed as the inventor.
What are the implications of AI being used as a tool to develop an invention?
The requirement for there to be an “inventor” is key to most patent legislation. “Inventorship” requires that the relevant party contribute to the “conception” of an invention. Where AI is only used as a tool to develop an invention, then AI could not be considered to be an “inventor”. In that context, the activities of the natural person who used the AI tool would need to be considered to determine the question of “inventorship”. Examples of AI being used as a tool include where AI functions as a calculator, or retriever or analyser of data.
What are the implications of AI autonomously generating an invention?
Problems do arise where AI autonomously generates an invention – ie where it allegedly conceives or contributes to the conception of an invention. The DABUS/Thaler cases mentioned below consider the problem head on – ie can AI, under relevant current patent legislation, be considered to be an “inventor”? Before considering those cases, it is important to note that there is no universal acceptance of the view that the current state of AI can conceive or can contribute to the conception of an invention.
Could you briefly outline the nature of the DABUS/Thaler cases?
As a test case, Stephen Thaler lodged various patent applications identifying DABUS (an AI system he built) as the inventor of the relevant invention. The applications resulted in rulings by the UKIPO and the UK High Court in Thaler v The Comptroller-General of Patents, Designs and Trade Marks (now on appeal); by the EPO (now on appeal to the EPO Legal Board of Appeal); by the USPTO (now on appeal to the US District Court for the Eastern District of Virginia); and by IP Australia (with an appeal to the Australian Federal Court). The essence of all the first instance decisions was that the relevant current legislation did not allow for a non-human inventor. Had Thaler identified himself as the inventor rather than DABUS, the applications would have proceeded in the normal manner and probably to grant. The DABUS/Thaler cases have given rise to speculation that numerous patents have been granted on inventions autonomously generated by AI, where for prudence’s sake a human rather than the relevant AI has been identified as the inventor. That approach could prove problematic if inventorship is later challenged.
What patent issues apart from “inventorship” need to be considered in the context of AI autonomously generated inventions?
Without being exhaustive, some of the issues to consider include the following: (a) Who is to be the owner of any patent? Amongst the options are: the AI system itself; the owner of the AI system; the developer of the AI system; the user of the AI system; the supplier of data to the AI system; the financier or investor who funded the creation of the AI system; and the person by whom the arrangements necessary for the conception or creation of the invention are undertaken.
Assuming that policy considerations will militate against the AI system itself being the owner, it is not unlikely that a combination of the above parties will have contributed to the conception or creation of the invention, and in varying degrees. Ultimately, settling on who has the best entitlement will need to be a pragmatic policy decision, but ideally one which is harmonised internationally.
Ryan Abbott, a well-known commentator and author of The Reasonable Robot: Artificial Intelligence and the Law, identifies the AI owner as the preferred default party, so as to minimise transaction costs. (b) Will the reference point for inventiveness or obviousness need to be changed? That is, will there be a need to change the reference point from “a person skilled in the art” to “a machine trained in the art”? Where AI is merely used as a tool, the predominant view expressed in the UK government’s published response to its consultation on AI and IP was that it was not necessary to change the reference point, as AI would be part of the skilled person’s toolkit; but where independent AI invention arose, that would be one of the many issues needing to be reconsidered.
On 23 March 2021, the UK government published its response to a consultation on AI and IP. What was the most interesting aspect of that report to you?
Firstly, the diversity of views as to the developmental stage of AI amongst the various respondents to the consultation – the predominant view appeared to be that AI had not advanced beyond the “tool” stage and that inventive AI was a long way off. Secondly, the clear recognition that the impact of AI extends beyond patents to copyright, designs and, incidentally, trade secrets. One IP subject area I think the report missed, however, was trademarks. Thirdly, how seriously the UK government appears to view this evolving area – eg “…we will build on the suggestions made…and consult later this year on a range of policy options, including legislative change for protecting AI generated inventions which would otherwise not meet inventorship criteria”.
The abovementioned European Commission White Paper considered the challenges posed by AI (see also this article on the Kluwer Patent Blog). The USPTO published its own report last year. Can you make a comparison between the two reports?
It is difficult to compare these two reports, largely because we are not comparing like with like. The European Commission White Paper takes a holistic view of the impact of AI on the socio-economic position of Europe, with no reference to IP, whereas the USPTO report is clearly IP-centric, more akin to the abovementioned UK report, but without a government response. Usefully, the USPTO report takes a deeper dive into trademark, trade secret and database issues than the UK report. Importantly, the USPTO’s position is that there is still a lot to be learned about how best to understand and deal with AI in the IP context.
What is the position in Australia regarding AI?
The Australian Government has initiated a consultation process via a discussion paper – “An AI Action Plan for all Australians” https://consult.industry.gov.au/digital-economy/ai-action-plan/supporting_documents/AIDiscussionPaper.pdf. The response to that discussion paper is not yet available.
What is the greatest challenge in relation to AI at a worldwide level?
AI has and will continue to have a growing impact on society worldwide. IP is one small consideration of that impact. The greatest challenge for IP in that context will be to achieve some level of harmonisation worldwide. That is always difficult where stakeholders are at different levels of understanding and development. It is important however that this exercise is not considered purely as a patent system improvement exercise as opposed to a broader IP system improvement exercise.
You are co-editor of the Wolters Kluwer publication ‘International Licensing and Technology Transfer: Practice and the Law’. Could you give an insight into what it was like to be involved in this?
The publication has been going since 2008. The other co-editors (Peter Chrocziel and Russell Levine) and I have been involved since the idea for the publication first arose, probably around 2006. That continuity ensures a consistency of vision for the publication, which has always been practitioner-focussed. Peter looks after Europe, Russell looks after the Americas, and I look after Asia Pacific and have general oversight. Each country’s section is updated on a regular basis by practitioners in the relevant jurisdiction, and then goes through a review process by the editor responsible for the relevant geographic area. Importantly, what we have sought to do from the beginning is to ensure that the content is continually improved and always relevant. That means that the other editors and I are in dialogue with the contributors on their updates. As a bonus, it also means that we keep ourselves up to date on what is going on in the jurisdictions we look after.
Is there anything else you’d like to mention?
History shows that the law is not good at keeping up with technological change. AI is such a societal game changer that if the law falls behind in this space it will be very hard to catch up.
If you are interested in becoming an editor/author for Kluwer IP Law, please contact Christine Robben or Anja Kramer.
_____________________________
There are a million issues that should be harmonised internationally with higher priority than this “artificial” problem. We can deal with it once the AIs start complaining about discrimination against them in the patent laws and threatening to go on strike…
Peter, thank you for your comment!
I was thinking we can talk about AI inventorship when the first AI gets itself an attorney and successfully sues for inventorship or ownership (presumably it didn’t sign a contract beforehand).
This whole discussion somehow reminds me of the PETA suit to assign copyright to a monkey…
AI is a buzzword, not to say a fallacy, like plenty of others before it: big data, IoT, you name it.
Were it not possible to grab money from governments that have been lulled in, the soufflé would already have collapsed.
It is a wonderful playground for academics and lawyers, and it should stay there.
I do not deny that AI systems have a future in some areas, especially with repetitive chores.
“Intelligent” is a misnomer; “artificial” is not!
When thinking in patent terms, I doubt we will see big developments, if only for sufficiency reasons. What is precious is the combination of training data and algorithm. Giving away this duality in order to obtain a patent will probably be a rare occurrence.
When one sees what DABUS has come up with, shaking one’s head is a sound reaction!
Personally, I find it more interesting how to define in future the “person” of ordinary skill in the art to whom the patent is addressed and to whom it must be sufficient and enabling. Will it in future be ordinary for that hypothetical person to be one who routinely relies on an AI for help in figuring out how to get something to work? After all, the AIs of the future will find everything obvious, won’t they?
And when you read this it is perhaps unwise to retort: not in my lifetime they won’t. But if we do need to re-define the skilled “person”, the judges will surely take it all in their stride. Even if we still have humans doing the judging.
If there were not a lot of money to grab with AI, it would have returned to where it should never have left, namely the playground of academics and legal scholars. The lobbies have been quite efficient at dragging money from governments. AI will most probably end up like other bubbles – I am thinking here of big data and IoT…
I am convinced that AI can be helpful when it comes to dealing with repetitive chores like analysing pictures. But a lot of water will run down the rivers before a car is intelligently driven.
As far as patents are concerned, the requirements for sufficiency are such that not only the training data but also the algorithm will have to be disclosed. I doubt that lots of applications will be filed and patents granted. See eg T 509/18 or T 161/18.
When one sees what DABUS managed to bring forward, one can have doubts about the impact of AI in matters of inventive step, besides the publicity for the applicant.
In AI nothing is intelligent, but everything is artificial!
I agree with MaxDrei that the use of AI raises questions regarding the definition of the skilled person – data scientists will have to be part of the team – and whether the common general knowledge of the skilled person should include familiarity with the use of AI tools. This also raises questions as to which information relied upon in AI applications is “public”, be it algorithms or data.
I also agree with Attentive Observer that patenting inventions using AI faces big issues, especially as to the disclosure of training data. As the saying goes: “AI is only as good as the datasets it’s trained on”. An applicant faces a difficult dilemma if the case law requires the disclosure of information which has been costly to produce. Hence the idea that confidentiality, offering protection for the training datasets, may be a more suitable option than patenting. But we are not there yet; let’s see where the BOAs will set the threshold in future decisions.
We might expect a “dynamic interpretation”!