The greatest challenge for IP in the area of artificial intelligence (AI) will be to achieve some level of harmonisation worldwide. So says IP expert Adam Liberman in the second part of a series of interviews with authors and editors of Kluwer IP Law. Liberman is Director, IP Advantage at Deloitte and an adjunct professor at the University of New South Wales. He has over forty years’ experience advising on intellectual property, licensing, commercial and corporate matters.
To begin with, could you briefly explain what “artificial intelligence”, or AI, is?
Firstly, one has to understand that the topic of “artificial intelligence” is an extremely complex one. That complexity brings with it numerous views on how “artificial intelligence” is or should be defined or whether it should be defined at all. Some examples illustrate the point:
“an algorithm or machine capable of completing tasks that would otherwise require cognition” (Abbott)
“a machine that behaves in ways that would be called intelligent if a human was so behaving” (McCarthy)
“AI can be understood as computer functionality that mimics cognitive functions associated with the human mind (eg the ability to learn)” (Response from various parties including IBM in the USPTO, Public Views on Artificial Intelligence and Intellectual Property Policy October 2020)
“AI is a collection of technologies that combine data, algorithms and computing power” (European Commission White Paper on Artificial Intelligence, Brussels 19.2.2020)
“Undue effort should not be expended on defining AI, which is dynamic and will be subject to fundamental change in the coming years” (Response from various parties including Ericsson in the USPTO, Public Views on Artificial Intelligence and Intellectual Property Policy October 2020)
Given that exploring the definition of AI may not be of practical assistance in discussing AI in the context of patents, it is easier to get to the heart of the problems that arise in the patent context by considering whether AI assists in generating an invention, and as such is a tool in the inventive process, or whether AI can autonomously generate an invention, and as such can be viewed as the inventor.
What are the implications of AI being used as a tool to develop an invention?
The requirement for there to be an “inventor” is key to most patent legislation. “Inventorship” requires that the relevant party contribute to the “conception” of an invention. Where AI is only used as a tool to develop an invention, then AI could not be considered to be an “inventor”. In that context, the activities of the natural person who used the AI tool would need to be considered to determine the question of “inventorship”. Examples of AI being used as a tool include where AI functions as a calculator, or retriever or analyser of data.
What are the implications of AI autonomously generating an invention?
Problems do arise where AI autonomously generates an invention – ie where it allegedly conceives or contributes to the conception of an invention. The DABUS/Thaler cases mentioned below consider the problem head on – ie can AI, under relevant current patent legislation, be considered to be an “inventor”? Before considering those cases, it is important to note that there is no universal acceptance of the view that the current state of AI can conceive or contribute to the conception of an invention.
Could you briefly outline the nature of the DABUS/Thaler cases?
As a test case, Stephen Thaler lodged various patent applications identifying DABUS AI (a system he built) as the inventor of the relevant invention. The applications resulted in rulings by the UKIPO (upheld by the UK High Court in Thaler v The Comptroller of Patents et al, now on appeal), the EPO (now on appeal to the EPO Legal Board of Appeal), the USPTO (now on appeal to the US District Court for the Eastern District of Virginia) and IP Australia (with an appeal to the Australian Federal Court). The essence of all the first instance decisions was that the relevant current legislation did not allow for a non-human inventor. Had Thaler identified himself as the inventor rather than DABUS, the applications would have proceeded in the normal manner and probably to grant. The DABUS/Thaler case has given rise to speculation that numerous patents have been granted for inventions autonomously generated by AI, but where for prudence’s sake a human has been identified as the inventor rather than the relevant AI. That approach could prove problematic if inventorship is later challenged.
What patent issues apart from “inventorship” need to be considered in the context of AI autonomously generated inventions?
Without being exhaustive, some of the issues to consider include the following: (a) Who is to be the owner of any patent? Amongst the options are: the AI system itself; the owner of the AI system; the developer of the AI system; the user of the AI system; the supplier of data to the AI system; the financier or investor who funded the creation of the AI system; the person by whom the arrangements necessary for the conception or creation of the invention are undertaken.
Assuming that policy considerations will militate against the AI system itself being the owner, it is not unlikely that a combination of the above parties may have contributed to the conception or creation of the invention, and that there could be multiple such parties who contribute in varying degrees. Ultimately, settling on who should have the best entitlement will need to be a pragmatic policy decision, but one which ideally is harmonised internationally.
Ryan Abbott, a well-known commentator and author of The Reasonable Robot: Artificial Intelligence and the Law, identifies the AI owner as the preferred default party so as to minimise transaction costs. (b) Will the reference point for inventiveness or obviousness need to be changed? That is, will there be a need to change the reference point from “a person skilled in the art” to “a machine trained in the art”? Where AI is merely used as a tool, the predominant view expressed in the UK government’s published response to its consultation on AI and IP was that it was not necessary to change the reference point, as AI would be part of the skilled person’s tool kit; but where independent AI invention arose, that would be one of the many issues needing to be reconsidered.
On 23 March 2021, the UK government published its response to a consultation on AI and IP. What was the most interesting aspect of that report to you?
Firstly, the diversity of views among the various respondents to the consultation as to the developmental stage of AI – the predominant view appeared to be that AI had not advanced beyond the “tool” stage and that inventive AI was a long way off. Secondly, the fact that there was a clear recognition that the impact of AI extended beyond patents, to copyright, designs and, incidentally, to trade secrets. One IP subject area that I think they missed out on, however, was trademarks. Thirdly, how seriously the UK government appears to view this evolving area – eg “…we will build on the suggestions made…and consult later this year on a range of policy options, including legislative change for protecting AI generated inventions which would otherwise not meet inventorship criteria”.
The abovementioned European Commission White Paper considered the challenges posed by AI (see also this article on the Kluwer Patent Blog). The USPTO published its own report last year. Can you make a comparison between the two reports?
It is difficult to make a comparison between these two reports, largely because we are not comparing like with like. The European Commission White Paper takes a holistic view of the impact of AI on the socio-economic position of Europe, with no reference to IP, whereas the USPTO report is clearly IP-centric, more akin to the abovementioned UK report, but without containing a government response. Usefully, the USPTO report takes a deeper dive into trademark, trade secret and database issues than the UK report. Importantly, the USPTO position is that there is still a lot to be learned about how best to understand and deal with AI in the IP context.
What is the position in Australia regarding AI?
The Australian Government has initiated a consultation process via a discussion paper – “An AI Action Plan for all Australians” https://consult.industry.gov.au/digital-economy/ai-action-plan/supporting_documents/AIDiscussionPaper.pdf. The response to that discussion paper is not yet available.
What is the greatest challenge in relation to AI at a worldwide level?
AI has had, and will continue to have, a growing impact on society worldwide. IP is one small part of that impact. The greatest challenge for IP in that context will be to achieve some level of harmonisation worldwide. That is always difficult where stakeholders are at different levels of understanding and development. It is important, however, that this is treated not purely as a patent-system improvement exercise but as a broader IP-system improvement exercise.
You are co-editor of the Wolters Kluwer publication ‘International Licensing and Technology Transfer: Practice and the Law’. Could you give some insight into what it was like to be involved in this?
The publication has been going since 2008. The other co-editors (Peter Chrocziel and Russell Levine) and I have been involved since the idea for the publication first arose, which was probably around 2006. That continuity ensures a consistency of vision for the publication, which has always been practitioner-focussed. Peter looks after Europe, Russell looks after the Americas, and I look after Asia Pacific and have general oversight. Each country’s section is updated on a regular basis by practitioners in the relevant jurisdiction. It then goes through a review process by the editor responsible for the relevant geographic area. Importantly, what we have been seeking to do right from the beginning is to ensure that the content is always improved and always relevant. That means that the other editors and I have a dialogue with the contributors on their updates. As a bonus, it also means that we get to update ourselves on what is going on in the jurisdictions that we look after.
Is there anything else you’d like to mention?
History shows that the law is not good at keeping up with technological change. AI is such a societal game changer that if the law falls behind in this space, it will be very hard to catch up.