21 December 2022

Will AI Chatbots Make Patent Attorneys Obsolete? (No, They Won’t)

If you have any interest at all in technology, and possibly even if you do not, it is very likely that over the past few weeks you have read or heard something about OpenAI’s ChatGPT chatbot, which was released for public testing on 30 November 2022.  OpenAI describes itself as ‘an AI research and deployment company’ with a mission ‘to ensure that artificial general intelligence benefits all of humanity.’  It counts Microsoft as a major sponsor, to the tune of a US$1 billion investment that gives OpenAI access to enormous computing resources, in exchange for which Microsoft gains privileged access to OpenAI’s breakthroughs.  ChatGPT’s conversational and question-answering skills are – superficially, at least – impressive.  Naturally, this has led some breathless commentators to suggest that we are on the verge of seeing various professional jobs, including those of lawyers, being replaced by AI machines.  But is ChatGPT really that good, or are people merely being blinded to its limitations by the fact that it is undeniably much better than anything they have seen before?

If language models like ChatGPT are going to replace professional advisors, such as lawyers and patent attorneys, then they will need to demonstrate the standards of competence and reliability (not to mention ethical conduct and responsibility) that the public expects from such advisors.  As we shall see, my experiments with ChatGPT demonstrate that it is nowhere near achieving these standards.  Indeed, there is a serious question as to whether it even represents a viable approach to developing AI with such capabilities.  In any event, there are no signs that machines will be replacing professional advisors in the foreseeable future.

A Brief Introduction to ChatGPT

ChatGPT is based on OpenAI’s GPT-3 family of large language models.  What does that mean?  The ‘3’ simply identifies this as the third generation of the series, while ‘GPT’ stands for Generative Pre-trained Transformer.  ‘Transformer’ is the name given to a certain type of deep learning network first described back in 2017 by a team of Google researchers in a (famous) paper entitled Attention is All You Need.  ‘Generative Pre-trained’ means pretty much what it says – the model is pre-trained to generate sequences of words.  It is a ‘language model’ in the sense that it embodies, within the billions of trainable parameters making up its network, a type of statistical model of the language(s) on which it is trained.  For example, one common training process involves feeding the model a passage of text in which some words have been masked out (i.e. replaced with ‘gaps’) and asking the model to ‘guess’ the missing words.  Through an extensive training process (some estimates place the cost of a single successful GPT-3 training run at up to US$12 million, although OpenAI presumably has unlimited access to the Microsoft Azure AI supercomputing infrastructure that it uses for training) the model eventually becomes quite good at these types of tasks.  It is then simply a matter of giving the model an initial prompt, and getting it to ‘predict’ the next word, and then the next, and the next… and so on, for as long as what comes out continues to make sense.
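
For the technically curious, here is a minimal sketch of that ‘predict, append, repeat’ loop, using the publicly released GPT-2 model (ChatGPT’s smaller predecessor, discussed further below) via the open-source Hugging Face transformers library.  The prompt and the number of generated tokens are arbitrary choices of mine for illustration – this is not OpenAI’s production code, just a demonstration of the basic mechanism:

    # A minimal sketch of autoregressive generation: feed the model a prompt,
    # take its most probable next token, append it, and repeat.  Uses the
    # publicly available GPT-2 model via the Hugging Face 'transformers' library.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    ids = tokenizer.encode("A patent is a legal right that", return_tensors="pt")

    for _ in range(25):  # generate 25 more tokens, one at a time
        with torch.no_grad():
            logits = model(ids).logits              # scores for every possible next token
        next_id = logits[0, -1].argmax().view(1, 1)  # pick the most probable one
        ids = torch.cat([ids, next_id], dim=1)       # append it, and go again

    print(tokenizer.decode(ids[0]))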

And the latest generations of GPT models can generate quite long passages of coherent text.  They have a sufficiently large number of parameters, and have been trained on such enormous volumes of text, that they encode relatively long-range patterns of language.  That is why you can ask ChatGPT to write a limerick, or a haiku, or a sonnet, and it will (often) produce output that, while lacking in poetic soul, will at least reproduce something close to the correct poetic form.

But while generative language models can appear almost magical in their abilities, the truth is that they are incapable of being ‘creative’ in any meaningful sense of the word.  Language models of this type have been characterised as ‘a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot’ (On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜).  (Infamously, the publication of this paper, which highlighted the ethical costs and dangers of large language models, led to the controversial departure of its second-named author, Timnit Gebru, from her job at Google, where she co-led a team on the ethics of artificial intelligence.  And, yes, the parrot emoji is part of the original title.)  Repeatedly predicting the most probable next word in a sequence of text, based on a data set of some 500 billion words sourced largely from the Common Crawl repository of web crawl data, is a purely procedural activity that involves no real intelligence, artificial or otherwise.
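
To make the ‘stochastic parrot’ idea concrete, here is a toy illustration (my own, not from the paper): record which words follow which in a tiny corpus, then ‘generate’ text by repeatedly sampling a next word in proportion to how often it followed the previous one.  The output can look superficially language-like, yet meaning plays no part at any step:

    # A toy 'stochastic parrot': record which words follow which in a tiny
    # corpus, then generate text by sampling each next word according to
    # those observed frequencies.  No meaning is involved, only statistics.
    import random
    from collections import defaultdict

    corpus = ("the invention must be a manner of manufacture and "
              "the invention must be novel and the invention must be useful").split()

    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)  # duplicates preserve relative frequency

    word, output = "the", ["the"]
    for _ in range(12):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sample in proportion to observed frequency
        output.append(word)

    print(" ".join(output))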

In the case of ChatGPT, the illusion of competence has been enhanced by optimising the model for conversation using a technique known as Reinforcement Learning from Human Feedback (RLHF).  This involved firstly fine-tuning the model by giving it examples of conversations in which both sides were written by human trainers.  Then the trainers ranked groups of alternative model-written responses, with the rankings being used to ‘reward’ the model for generating better replies.  Over multiple iterations of this process, ChatGPT learned to become a more convincing conversationalist.
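
At the heart of the RLHF process is a ‘reward model’ trained on those human rankings.  The following is a minimal sketch of the standard pairwise ranking loss used for this purpose, as described in the research literature – the function and variable names are my own illustrative choices, not OpenAI’s code:

    # A minimal sketch of the reward-model training objective in RLHF:
    # given scalar scores for a human-preferred response and a rejected
    # alternative, the loss is small when the preferred response already
    # scores higher, and large otherwise.
    import torch
    import torch.nn.functional as F

    def ranking_loss(score_preferred, score_rejected):
        # -log(sigmoid(difference)): drives the model to score the
        # human-preferred response above the rejected one.
        return -F.logsigmoid(score_preferred - score_rejected).mean()

    # Toy scores for two candidate replies to the same prompt.
    print(ranking_loss(torch.tensor([1.2]), torch.tensor([0.3])))  # low loss: ranking correct
    print(ranking_loss(torch.tensor([0.3]), torch.tensor([1.2])))  # high loss: ranking wrong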

So let’s put it to the test as a patent attorney!

Does ChatGPT Know Its Case Law?

Any competent Australian patent attorney should know something about one of the most famous High Court decisions in the field, commonly known as the ‘NRDC case’ (National Research Development Corporation v Commissioner of Patents [1959] HCA 67).  So I asked ChatGPT ‘What was the reasoning in the High Court's decision in the 1959 case involving the Commissioner of Patents and the National Research Development Corporation?’  It responded:

The 1959 case involving the Commissioner of Patents and the National Research Development Corporation, also known as the NRDC case, was a significant case in Australian patent law that addressed the question of what constitutes a manner of manufacture, which is a requirement for an invention to be eligible for a patent in Australia.

The case involved an application by the National Research Development Corporation (NRDC), a UK government-owned research and development company, for a patent in Australia for a process for purifying tungsten. The Commissioner of Patents refused the application on the grounds that the process was not a manner of manufacture, as it was a mere scientific principle or abstract idea that did not produce any material result or have any practical utility.

The NRDC appealed the decision to the High Court of Australia, which upheld the Commissioner's decision and rejected the patent application. In its decision, the High Court set out the test for manner of manufacture, which requires an invention to be a product or process that makes a material contribution to the technical field in which it falls, and that has practical utility. The Court held that the NRDC's process for purifying tungsten did not meet this test, as it was a mere scientific principle that did not produce any material result or have any practical utility.

The NRDC case is still frequently cited in Australian patent law as a leading authority on the requirement for an invention to be a manner of manufacture in order to be eligible for a patent.

Readers familiar with the case will know that, despite ChatGPT’s confident delivery, this response gets almost all of the details spectacularly wrong!  It is true that the NRDC case ‘addressed the question of what constitutes a manner of manufacture, which is a requirement for an invention to be eligible for a patent in Australia’ and that it ‘is still frequently cited in Australian patent law as a leading authority on the requirement for an invention to be a manner of manufacture’.  But the case involved a process of chemically treating crops to kill weeds and improve growth (nothing to do with purifying tungsten), and the NRDC’s appeal was successful, with the High Court overturning the Commissioner’s decision.  I am not sure where ChatGPT got its ‘facts’ from.  I found a 1928 ruling in which a patent relating to the purification of tungsten was invalidated because it claimed a naturally-occurring element.  However, that was a US case in the Third Circuit Court of Appeals involving an issued patent owned by General Electric being challenged by De Forest Radio Co – i.e. nothing to do with Australia, NRDC, any Commissioner of Patents, or a High Court.

So I set ChatGPT straight, responding ‘That is wrong. The NRDC case involved the treatment of crops, not the purification of tungsten. And the NRDC's appeal was successful.’  To its credit, it was happy to admit its error:

I apologize for the error in my previous response. You are correct that the NRDC case involved an application by the National Research Development Corporation (NRDC) for a patent in Australia for a process for treating crops, and not the purification of tungsten.

In the NRDC case, the Commissioner of Patents refused the patent application on the grounds that the process was not a manner of manufacture, as it was a mere scientific principle or abstract idea that did not produce any material result or have any practical utility. The NRDC appealed the decision to the High Court of Australia, which upheld the NRDC's appeal and granted the patent.

In its decision, the High Court set out the test for manner of manufacture, which requires an invention to be a product or process that makes a material contribution to the technical field in which it falls, and that has practical utility. The Court held that the NRDC's process for treating crops met this test, as it involved the application of a scientific principle to a practical problem in a way that produced a material result and had practical utility.

The NRDC case is a significant case in Australian patent law, and is frequently cited as a leading authority on the requirement for an invention to be a manner of manufacture in order to be eligible for a patent. I apologize for any confusion caused by my previous response.

Once prompted, then, ChatGPT is able to generate more appropriate text.  However, its repetition is notable: substantial passages of its corrected response are verbatim reproductions from its original response.  This kind of rote repetition is characteristic of generative language models, because once they get on a roll producing a particular passage of text they tend to make the same next-word predictions, and thus generate the same output sequence of words.
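
This behaviour is easy to demonstrate.  With ‘greedy’ decoding (always picking the most probable next word, as in the earlier GPT-2 sketch), an identical preceding context always produces an identical continuation.  A brief sketch, again using the public GPT-2 model and an arbitrary prompt of my choosing:

    # Demonstrating rote repetition: given the same context, greedy
    # decoding deterministically yields the same sequence every time.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def greedy_continue(prompt, n_tokens=15):
        ids = tokenizer.encode(prompt, return_tensors="pt")
        for _ in range(n_tokens):
            with torch.no_grad():
                logits = model(ids).logits
            next_id = logits[0, -1].argmax().view(1, 1)
            ids = torch.cat([ids, next_id], dim=1)
        return tokenizer.decode(ids[0])

    # Two runs from an identical context produce identical text.
    first = greedy_continue("The NRDC case is frequently cited as")
    second = greedy_continue("The NRDC case is frequently cited as")
    print(first == second)  # True: same context, same next-word predictions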

At this point, ChatGPT – the most sophisticated chatbot ever created – fails the so-called ‘Turing Test’.  In Alan Turing’s original formulation of his ‘imitation game’, a human interrogator is given the task of determining, based solely on written responses, which of two other parties is a human and which is a machine.  The machine passes the test if the interrogator cannot reliably distinguish between human and machine.  Quite aside from any other failings it may have, ChatGPT gives itself away by the confidence with which it delivers wrong answers, and its tendency towards verbatim repetition.

It is also worth noting that ChatGPT was probably not led astray, in its initial attempt to explain the NRDC case, by false information in its training data.  The most likely explanation is that its version was derived from two (or more) sources of factual information that it somehow conflated to produce a false narrative.  This suggests that the problem of these kinds of language models generating incorrect responses cannot be fixed merely by ensuring that all of the text used for training is factually accurate.  They have absolutely no regard for the truth, and will go wherever the statistics of their training data lead them.

Can ChatGPT Provide Basic General Advice?

Perhaps asking about a specific case – however famous – was too hard a task.  But what about providing more general advice?  I asked ChatGPT ‘Are computer-implemented inventions patentable in Australia?’  It replied:

Yes, computer-implemented inventions are generally eligible for patent protection in Australia.

In Australia, an invention is eligible for a patent if it is a manner of manufacture, which means that it must be a product or process that is capable of industrial application. This includes inventions that involve the use of computers or other electronic devices.

Computer-implemented inventions may be eligible for a patent in Australia if they involve the use of a computer to perform a specific function, or if they involve a new and useful improvement to a computer or computer-related technology. In order to be eligible for a patent, the invention must meet the usual requirements for a patent, including novelty, non-obviousness, and usefulness.

It is worth noting that the patentability of computer-implemented inventions in Australia is not dependent on the presence of a physical device or product. An invention that is purely a software-based solution may still be eligible for a patent if it meets the necessary requirements.

However, it is important to note that certain types of computer-implemented inventions may be excluded from patentability in Australia, such as inventions that relate purely to a business method or financial product, or inventions that are considered to be a mere scheme or abstract idea. These exclusions are set out in the Patents Act 1990 and are intended to prevent the grant of patents for inventions that are not considered to be true inventions.

Aside from the last sentence, this answer is reasonably good, and might be quite suitable for use in a ‘frequently asked questions’ page on a patent attorney firm’s website.  However, that final sentence is incorrect – there are no express exclusions for business methods, financial products, schemes or abstract ideas ‘set out in the Patents Act 1990’.  These kinds of subject matter have, over many years, been found to be unpatentable by the courts under the principles developed for application of the ‘manner of manufacture’ test (as ChatGPT would have known, had it any real understanding of the NRDC case).

Once again this demonstrates that no matter how much of its output might be correct, ChatGPT simply cannot be relied upon as a professional advisor.  Since it is utterly incapable of distinguishing between truth and falsehood, it cannot be depended upon to provide information to people who are themselves unable to identify and correct its errors.

ChatGPT is no DABUS!

If ChatGPT is no substitute for a competent patent attorney, what about using it to replace inventors instead?  I asked it ‘Are you capable of true creativity?’  The answer:

As a language model, I am not capable of true creativity in the same way that a human is. My responses are generated based on patterns and information that I have been trained on, and I do not have personal experiences or feelings.

However, I can assist with the process of creative thinking and idea generation by providing information and suggestions based on my training and knowledge. This can include brainstorming ideas, providing prompts and exercises to stimulate creative thinking, and helping you understand concepts and principles related to creativity.

It's important to remember that true creativity comes from within an individual, and is the result of their own personal experiences, emotions, and perspectives. While I can provide guidance and assistance, only humans have the ability to truly tap into their own creativity and create something original and unique.

At least ChatGPT is realistic about its capabilities – to paraphrase ‘Dirty’ Harry Callahan, a bot’s got to know its limitations!  I next asked it ‘Have you heard of DABUS?’  It replied:

DABUS is a machine learning model developed by researcher and inventor Stephen Thaler. It was trained to generate novel and creative ideas by analyzing a large dataset of human-generated ideas and concepts. DABUS has been credited with inventing a number of novel ideas and designs, including a new type of emergency beacon and a device for transporting objects through air or water.

DABUS has generated significant media attention and sparked a debate about the role and capabilities of machine learning models in the process of inventing and creating new things. Some experts have argued that DABUS is capable of true creativity and should be recognized as the inventor of its creations, while others have cautioned that it is ultimately dependent on human input and guidance, and that its creations should be attributed to its human creators.

A major difference between ChatGPT and DABUS is that we can all judge the capabilities, and limitations, of ChatGPT for ourselves.  The principles behind its design and training have been published by OpenAI, and it is available online for anybody to interact with.  Its smaller predecessor, GPT-2, has been made available to the public, in the form of both code and pre-trained models, and I expect that, in time, the same will be done for GPT-3 (not that many people will have the resources to run it).  This form of ‘open innovation’ is commonplace in the mainstream AI and machine learning research community.

DABUS, meanwhile, is largely a mystery.  As far as I am aware, nobody other than Stephen Thaler has ever witnessed its alleged inventive activities.  No code or public demonstration is available.  The specification of the granted US patent that supposedly discloses an implementation of DABUS does not, on my reading, provide a sufficient description of anything that would reproduce the feats of invention claimed for DABUS.  In short, we are being asked to take it on faith that Thaler alone has achieved a breakthrough in AI that has eluded the largest technology companies in the world, with all of the resources that they have at their disposal.

The ‘Sagan standard’ – that extraordinary claims require extraordinary evidence – applies.  Until we have the same opportunity to inspect and challenge DABUS as we have with ChatGPT, I will remain a non-believer.

Conclusion – ChatGPT is no Threat to Our Jobs

ChatGPT generates a fairly convincing simulation of human language use.  However, being convincing and being correct are completely different things.  As OpenAI itself concedes, ‘ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers’, primarily because when training the model there is ‘no source of truth’.  That is to say, the training data (i.e. mostly web content) may contain correct information, incorrect information, or a mix of both, and the model has no way of distinguishing between what is true and what is not.  It simply generates plausible sequences of words, derived from its training data, without any understanding of the meaning of those sequences.

It is a matter of everyday experience that humans create meaning first, and then apply language to communicate that meaning.  The gap between thought and language might be small, but I certainly had the entire idea for this paragraph in mind before I began to set down the words to express it.  What we humans manifestly do not do is to start out with a single word, having no clear idea of where we are going, and simply select each subsequent word in turn, based on some innate rules of language structure, in the blind hope that the resulting sequence will convey some relevant meaning.  Yet that is exactly what ChatGPT does.  It is not intelligent.  It is not conscious.  It is, often, not even right.  It may seem to speak, but it has no idea what it is saying.  That is what it means to be a ‘stochastic parrot’.

Patent attorneys – and, indeed, all professional ‘knowledge workers’ – are relied upon for our expertise and our experience.  People rightly expect us to ‘know our stuff’, and to admit when we reach the limits of our knowledge and need to go away and conduct further research or inquiries.  But more than that, we are expected to be able to apply what we know creatively.  It is essential that we provide information and advice that is correct, but that is not enough.  We must also understand and address our clients’ problems, needs and concerns.  These may be unique, and may require the selection, application and combination of knowledge in ways never before conceived.

ChatGPT is plainly incapable of such understanding and creativity.  But I would go further than that.  I would argue that the underlying technology upon which ChatGPT is built, i.e. the very premise of large language models trained to embody the patterns and structures of language, is not even a step along the road to machines that are intelligent in the sense that they can comprehend and make meaning in creative ways.  There is no apparent evolutionary pathway from stochastic parrots to thinking machines.

Large language models are mindless, which implies that they are suited only to mindless tasks.  That is no bad thing.  Many of us would be glad to be relieved of those mindless tasks so that we can spend our time and energy on activities that actually require the exercise of our intellects.  We need not fear that ChatGPT or its progeny will be replacing professional advisors in the foreseeable future.  It is far more likely that they will be deployed to make our lives easier.

Postscript – Patent Attorney Poetics

Finally, I asked ChatGPT to ‘Please write me a 14 line sonnet about the joys of being a patent attorney’ (while tautological, I found that if I did not request 14 lines, it would insist on producing four four-line stanzas instead of a proper sonnet).  This is what it wrote – I think we can agree that ChatGPT is no Shakespeare!

The joy of being a patent attorney lies
In helping others bring their ideas to life
The thrill of solving complex legal ties
And aiding the creation of new inventions rife

The satisfaction of seeing a patent granted
To a client whose hard work has finally paid
The opportunity to make a difference, planted
In the knowledge that their innovation will be stayed

The pride in knowing that the fruits of labor
Will lead to progress, advancement, and growth
The honor of safeguarding intellectual honor
And the chance to contribute to society's oath

So if you love the law and innovation's spark
Consider becoming a patent attorney, embark
