13 January 2018

The Impact of Machine Learning on Patent Law, Part 1: Can a Computer ‘Invent’?

As a product of millions of years of evolution, the human brain is a remarkable organ.  Recent research indicates that a typical brain comprises somewhere in the vicinity of 80 to 100 billion neurons, and a roughly equal number of non-neuronal cells.  This mass of biological matter is capable of astonishing feats – many of them simultaneously – from enabling us, consciously and unconsciously, to control the behaviour and movement of our bodies, to sensing, comprehending and interacting with the environment around us, to communicating with one another using a variety of languages and symbols, to creating, composing and inventing brand new works of science, technology, and art.  In performing all of these tasks, the brain consumes just 20 watts of power.  By way of comparison, microprocessors at the high end of Intel’s latest Core i7 range consume up to 140 watts.

One relatively recent product of the amazing human brain is the range of technologies often collectively called ‘artificial intelligence’ (AI).  That is the last time I will use this particular phrase without irony in this series of articles – in my view, it is too vague a term, and tends to create an impression that computers are somehow approaching the capacity to operate on a par with human intelligence, which is simply not true.  Nonetheless, such luminaries as Stephen Hawking and Elon Musk have piped up over the past year or so with their concerns that our machines may soon rise up and render us obsolete or, worse still, destroy us!

In a similar vein, there are some people in the field of intellectual property who are starting to ask questions about whether computers can be ‘creative’ or ‘inventive’ and, if so, whether it should be possible for a computer to be named as an inventor on a patent application – or, conversely, whether some humans should be disentitled from inventorship on the basis that their computers, rather than themselves, were the true inventors.  One academic who has been making a name for himself in this emerging field of study is Professor of Law and Health Sciences at the University of Surrey, Ryan Abbott.  Professor Abbott is the author of, among other works on the topic, ‘I Think, Therefore I Invent: Creative Computers and the Future of Patent Law’, Boston College Law Review, Vol. 57, No. 4, 2016 (also available at SSRN), in which he argues that the law should embrace treating non-humans as inventors because this ‘would incentivize the creation of intellectual property by encouraging the development of creative computers.’

As I shall explain, however, I do not agree with Professor Abbott that computers can, or should, be regarded as inventors for the purpose of granting patents.  Furthermore, while Abbott accepts claims that patents have already been granted on what he calls ‘computational inventions’, I firmly believe that a computer is yet to ‘invent’ anything.  In my view, the researchers and technologists who claim otherwise have an interest in promoting a particular perspective, and in doing so they are subtly extending the definitions of ‘creation’ and ‘invention’ to encompass the contribution of their machines, to the detriment of the human operators who are responsible for providing the true creative input in the process.

I am further concerned that, should this view of ‘machine as (co)inventor’ prevail, it will in fact be to the detriment of the patent system.  I think it highly unlikely that lawmakers – whether they be legislators or common-law judges – will embrace the idea of granting patents on machine inventions.  On the contrary, it seems far more probable that if the notion takes hold that computers are actually doing the ‘inventing’ in many cases, it will simply become even more difficult for humans to secure patent protection for computer-implemented, or computer-assisted, inventions.

This is a complex topic that I intend to cover in a series of three articles.  In this first part, I will introduce the field of machine learning, give some examples, and then attempt to dispel some of the hype that has developed around this technology – including in Abbott’s work.  My aim here is primarily to refute the argument that existing machines are capable of engaging in ‘creative’ or ‘inventive’ activity.  In part 2, I will delve into the role of machine learning in assisting with the generation of new inventions.  Finally, I will look at how to go about identifying the (human) inventors in such cases.

A Brief Introduction to Machine Learning

Many of the ‘intelligent’ computers we hear about these days are examples of ‘machine learning’ (ML) systems.  The general concept of ML is not difficult to grasp.  ‘Traditional’ computer programming (of which there is still a great deal) involves implementing algorithms that are more or less specific to the particular task at hand, for example to process input data, perform calculations and decision-making based on that data, and generally produce a useful output.  A conventional search engine, for example, takes some key words or phrases as input, processes these to generate a query to a database which contains all of the search documents and indices, retrieves the corresponding results, ranks them according to some criteria, and then presents them to the user.  The programmer must produce code that explicitly implements each of these steps according to some set of defined algorithms.

Machine learning systems take a different approach.  Rather than explicitly programming a computer to perform a particular task, an ML system uses a learning algorithm through which some internal state of the system is configured in response to input data.  The internal state represents what the machine has ‘learned’ from patterns in the input data, without there being any need for the algorithm to include any explicit coding based on what the input data ‘means’, or for the programmer to explicitly define (or even to know) what patterns the machine should look for in the data.

By way of example, an ML implementation of a search engine could be developed to learn from the successes and failures of a conventional search engine.  Instead of using explicit algorithms to convert keywords into database queries, and results into search rankings, an ML system could be built using representations of keywords and corresponding successful and unsuccessful search results (i.e. those selected or ignored by users) as a training data set.  Given enough training data, and the right set of ML algorithms and parameters, such a system can learn to deliver the most relevant results in response to a very wide range of keyword inputs.  No code has to be written to map keywords to queries or to rank search results.  Instead, the creative work of the developer lies in the selection of algorithms and parameters for the ML system, and in designing the way in which keywords and other inputs are represented in order to obtain the best results.  And once the ML search system is up and running, it can continue to learn from its own successes and failures, in order to constantly improve its performance, and adapt to changes in database contents and user interests.
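To make the contrast with explicit programming concrete, here is a deliberately minimal sketch of a ‘learned’ ranker.  The feature names, training data and model (a tiny logistic regression trained on user selections) are all made up for illustration – a real search system would use far richer representations – but the principle is the same: no code maps keywords to rankings, and the system’s internal state (its weights) is adjusted to fit patterns in the data.

```python
import math

# Hypothetical training data: each result is represented by two features
# (keyword overlap with the query, document popularity), labelled 1 if a
# user selected it and 0 if it was ignored.
training_data = [
    ((0.9, 0.2), 1), ((0.8, 0.9), 1), ((0.7, 0.1), 1),
    ((0.2, 0.8), 0), ((0.1, 0.3), 0), ((0.3, 0.2), 0),
]

# The 'internal state' of this minimal learner: two weights and a bias,
# all initially zero, i.e. the system starts out knowing nothing.
weights, bias, rate = [0.0, 0.0], 0.0, 0.5

def score(features):
    # Logistic function: maps a weighted sum to a 'relevance' in (0, 1).
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Gradient-descent training loop: nudge the weights to reduce the error
# between predicted relevance and the user's actual selections.
for _ in range(1000):
    for features, label in training_data:
        error = score(features) - label
        weights = [w - rate * error * x for w, x in zip(weights, features)]
        bias -= rate * error

# Rank two unseen results by learned relevance score.
results = {"doc_a": (0.85, 0.5), "doc_b": (0.15, 0.5)}
ranked = sorted(results, key=lambda d: score(results[d]), reverse=True)
print(ranked)  # doc_a, with high keyword overlap, ranks first
```

Note that the ‘creative’ decisions here – which features to use, which model, which learning rate – were all made by the (human) programmer; the machine merely fitted the numbers.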

The hype around ML systems is not undeserved.  Researchers and engineers working in this field have achieved some astonishing results in recent years, including leaps forward in automatic translation of human languages and self-driving vehicles.  However, reports of these achievements intended for consumption by a non-technical audience – and often driven by marketing considerations – can contribute to an impression that these systems are far cleverer, and closer to achieving human-like ‘intelligence’, than is actually the case.  (Don’t even get me started on the ‘technological singularity’!)  A reality check may therefore be in order before proceeding.

The World’s Best ‘Go’ Player is Now a Machine

One of the most prominent recent examples of a successful ML application is the Google DeepMind AlphaGo system, which plays the board game Go – a game that is mathematically more complex than chess.  In March 2016, AlphaGo defeated 18-time world Go champion Lee Sedol by four games to one.  A paper describing the implementation of AlphaGo was published in the journal Nature – ‘Mastering the game of Go with deep neural networks and tree search’, Nature, Vol. 529, 28 January 2016 (PDF copy available, 2,620kB).  The version of AlphaGo that defeated Sedol ran on a system comprising 1920 general-purpose processors and 280 graphics processing units (GPUs), and by a rough calculation therefore consumed around a megawatt of power (not including the power it consumed while learning to play Go, or the air conditioning required to keep all that hardware cool).  For all this, AlphaGo could do only one thing – albeit extraordinarily well – and Lee Sedol, using only the Go-playing part of his 20W brain, was still able to win one game.

Since then, AlphaGo has become better and, thanks to more efficient algorithms and new hardware designed specifically for the kinds of tasks required by its ML computations, much more energy-efficient.  It now requires only the power of 50-100 human brains, or a domestic air conditioning unit, to play the best game of Go in the world.  But that is still all it does.

The Time a Machine Won the Game Show ‘Jeopardy!’

Another celebrated example of a machine besting humans at their own game is the 2011 victory by IBM’s Watson supercomputer over two champion players in the quiz show Jeopardy!.  This was impressive, because clues in the game (which must be ‘answered’ in the form of a question) may come from a wide range of subject areas, are presented with minimal context, and are expressed in natural language.  To win, Watson needed to be able to interpret the clue, search its huge database of documents (including the entirety of Wikipedia) for relevant information, extract the key data, evaluate the likelihood of correctness, formulate a response, and ‘buzz in’, ahead of either of the two human competitors.

Watson is a system built to apply a range of advanced natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning technologies to the field of open-domain question answering.  It is, in a sense, the ultimate research assistant, capable of answering almost any question… so long as the relevant information is in its massive database.  While it can identify and combine information from multiple sources – and do it much faster than any human researcher could – its operation is entirely deterministic.  Given the same question, and the same database contents, any given incarnation of Watson will always produce the same answer.  Certainly, improved versions of Watson have become, and will continue to become, capable of providing better answers over time, but that is due to the ingenuity and hard work of Watson’s human developers.

The version of Watson that won Jeopardy! ran on 10 refrigerator-sized racks of IBM POWER 750 servers, using 15 terabytes of RAM and 2,880 processor cores.  It consumed 200 kilowatts of power in order to beat two 20W human brains, but was unable to engage in free-form banter with the host… or to play Go.

Have Machines Already ‘Invented’ Things?

Returning to the question of ‘computational invention’, a key assertion made by Abbott is that computers already independently or autonomously create potentially patentable inventions.  He offers three examples in support of this claim:
  1. the ‘Creativity Machine’, which was developed in the early 1990s by computer scientist Stephen Thaler;
  2. the ‘Invention Machine’, a term for systems based on ‘genetic programming’ (GP), coined by Dr John Koza; and
  3. IBM’s Watson.
In all three cases, however, I fear that the claims of ‘creativity’ and ‘inventiveness’ – which invariably originate with the developers and their associates – are more marketing hype than a fair and accurate assessment of technical capability.

The ‘Creativity Machine’

Thaler’s ‘Creativity Machine’ is based on a neural network – the same general type of ML structure that lies at the heart of AlphaGo.  As I have already explained, while such systems can ‘learn’, by updating their internal state in response to training data and/or past performance, for a given (trained) state they are entirely deterministic, in the sense that the same inputs will always generate the same output. 
Thaler’s idea was to randomly perturb the state of a trained neural network with ‘noise’, to see if this would then result in it doing something unexpected, new and useful.  Here is an excerpt from his own description of the resulting invention, for which US patent no. 5,659,666 was granted in 1997:

Whereas the discovery of just how to adjust the noise level within a trained neural network to produce new ideas is a significant scientific finding, a viable patent was not achieved until a critic algorithm was added, whether heuristic, Bayesian, or neural network based, to monitor for the very best notions emerging from the perturbed network. This is the preferred embodiment of the invention called a Creativity Machine, a “dreaming” network, “imagination engine,” or “imagitron” that is monitored by another constantly vigilant algorithm that we appropriately call an ‘alert associative center’.
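The general idea – perturb a trained network with noise, and have a second, human-designed ‘critic’ algorithm filter the results – can be illustrated with a toy sketch.  To be clear, this is a loose caricature of the concept, not Thaler’s actual implementation; the ‘network’, critic and target value are all invented for the example.

```python
import random

random.seed(42)

# A toy stand-in for a trained network: a fixed set of weights mapping
# inputs to an output.  Given the same inputs, it always produces the
# same output, i.e. it is entirely deterministic.
trained_weights = [0.4, -0.7, 0.2]

def network(weights, inputs):
    return sum(w * x for w, x in zip(weights, inputs))

def critic(output, target=1.0):
    # The 'alert associative centre': a human-designed rule that scores
    # candidate outputs; here it simply prefers outputs close to a
    # desired target value.
    return -abs(output - target)

inputs = [1.0, 0.5, 2.0]
baseline = network(trained_weights, inputs)

# Perturb the trained state with random noise many times, and keep the
# best-scoring candidate output according to the critic.
best_output, best_score = baseline, critic(baseline)
for _ in range(200):
    noisy = [w + random.gauss(0.0, 0.1) for w in trained_weights]
    candidate = network(noisy, inputs)
    s = critic(candidate)
    if s > best_score:
        best_output, best_score = candidate, s

# The 'novel' output is just the best of many random perturbations, as
# judged by a criterion that a human designed in advance.
print(round(best_output, 3))
```

Viewed this way, the ‘dreaming’ network is a random-candidate generator, and all of the judgement about which ‘notions’ are worth keeping resides in the critic – which a human had to specify.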

Numerous exceptional claims for the Creativity Machine are reproduced in Abbott’s paper.  These include: that ‘the two artificial neural networks mimic the human brain’s major cognitive circuit: the thalamo-cortical loop’; that ‘like the human brain, the Creativity Machine is capable of generating novel patterns of information rather than simply associating patterns, and it is capable of adapting to new scenarios without additional human input’; and that the Creativity Machine was actually responsible for inventing the subject matter of Thaler’s second patent. 

If all of this sounds more like marketing hype than sound academic evaluation, there may be a good reason for that: Thaler, his associates, and/or documents on the web site of Thaler’s company, Imagination Engines Inc., are the direct or ultimate source of every one of these extraordinary claims.

Genetic Programming – the ‘Invention Machine’

Abbott’s second example of existing ‘computational invention’ is a technology known as ‘Genetic Programming’ (GP).  As the name suggests, the basic idea of GP is to develop computer programs structured in such a way that they are able to ‘evolve’.  As with other ML systems, such programs generally have inputs and outputs corresponding with some specific pre-defined task.  A ‘genetic program’ may be represented, for example, as a set of mathematical or other operations organised into a data structure, and which act on the inputs in order to generate the outputs.  Modifying the data structure thus changes the program, and its outputs.  Various rules (i.e. ‘genetic algorithms’) can be applied to produce ‘generations’ of such programs, where each new generation consists of variations on the previous generation, and to ‘select’ better-performing variations to carry their code forward into further generations.
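The generate-vary-select loop can be sketched in a few lines.  For simplicity, this toy example evolves fixed-length lists of coefficients (i.e. a genetic algorithm) rather than the full program trees used in GP, and the ‘fitness’ function is entirely made up – but the crossover, mutation and selection steps are the same in kind.

```python
import random

random.seed(0)

# The human designer chooses the representation, the fitness measure,
# and all of the evolutionary parameters.  Here, 'fitness' rewards
# genomes whose coefficients approach an arbitrary target.
TARGET = [3.0, -2.0, 0.5]

def fitness(genome):
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def crossover(a, b):
    point = random.randrange(1, len(a))  # single-point crossover
    return a[:point] + b[point:]

def mutate(genome, rate=0.3, scale=0.5):
    # Randomly perturb some genes - the source of 'variation'.
    return [g + random.gauss(0.0, scale) if random.random() < rate else g
            for g in genome]

# An initial 'generation' of random candidate solutions.
population = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(30)]

for generation in range(100):
    # Select the better-performing half as parents...
    population.sort(key=fitness, reverse=True)
    parents = population[:15]
    # ...and breed a new generation by crossover plus mutation.
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(15)]
    population = parents + children

best = max(population, key=fitness)
print([round(g, 2) for g in best])  # drifts towards the target coefficients
```

Nothing in the loop ‘understands’ the problem: it blindly varies candidates and keeps the ones that score best against a criterion supplied by its human designer.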

According to Abbott, the USPTO granted US patent no. 6,847,851 on an invention that was ‘created by the “Invention Machine”— the moniker for a GP-based AI developed by John Koza.’  Abbott goes on to describe Dr Koza as ‘a computer scientist and pioneer in the field of GP’ who ‘claims the Invention Machine has created multiple “patentable new invention[s].”’  I have a number of issues with this description. 

Firstly, as far as I have been able to ascertain, there is no single ‘GP-based AI’ that Koza, or anyone else, has developed and called ‘the Invention Machine’.  Rather, ‘Invention Machine’ appears to be a generic term used by Koza to describe GP as applied to generating new solutions to technical problems (see Genetic Programming is an Automated Invention Machine on Koza’s web site genetic-programming.com).  Koza contends that he is ‘considered the inventor’ of GP, although the Wikipedia article on the topic identifies significant prior work on the subject dating back to the 1950s.

Since 2004, Koza has been running an annual competition called the ‘Human-Competitive Awards’ (or ‘Humies’), in which cash prizes are given for applications of GP which result in solutions to real-world problems that are competitive with the work done by creative and inventive humans.  There are a number of alternative criteria for assessing human-competitiveness, including that ‘the result was patented as an invention in the past, is an improvement over a patented invention, or would qualify today as a patentable new invention.’  In 2010, Koza published an article, ‘Human-Competitive Results Produced by Genetic Programming,’ Genetic Programming & Evolvable Machines, Vol. 11, March 2010 (PDF copy available, 400kB), in which he summarised the various successes of entrants in the Humies.  This one paper is Abbott’s sole source for his assertion that ‘by 2010, there were at least thirty-one instances in which GP generated a result that duplicated a previously patented invention, infringed a previously issued patent, or created a patentable new invention.’

It is therefore apparent that Koza – a person with a clear interest in promoting the virtues of GP and his own contributions to the field – is the ultimate source of every one of the claims documented by Abbott that GP is an ‘Invention Machine’.

Which is More Creative – Watson or IBM’s Marketing Team?

As Abbott notes, ‘IBM describes Watson as one of a new generation of machines capable of “computational creativity”’ (emphasis added).  Abbott also quotes IBM documents which contend that such machines generate ‘ideas the world has never imagined before’ and that Watson ‘generates millions of ideas out of the quintillions of possibilities, and then predicts which ones are [best], applying big data in new ways.’

To misquote Mandy Rice-Davies, ‘well, they would say that, wouldn’t they?’  IBM markets a range of Watson-based products and services to a variety of potential customers – including Australian government agencies such as IP Australia – to provide ‘actionable insights from large amounts of unstructured data via natural language processing and machine learning.’

This is not to say that Watson is not remarkable.  However, a more sober analysis concludes that Watson is not some ‘super-smart Siri’, but IBM’s branding for ‘a wide range of AI techniques and related applications’ that it is offering to a growing market of organisations looking for ways ‘to apply these techniques, and to tap into the expertise required to do so.’

A Practical – and Fun – Example of ML in Action

If you are finding all of this discussion of ML somewhat abstract, an excellent and accessible demonstration of the use of neural networks, genetic algorithms – and, indeed, of the design process for ML systems more generally – can be seen in the YouTube video embedded below.  The video demonstrates the operation of ‘MarI/O’, an ML system configured to learn how to play the old Nintendo game Super Mario World.  I hope you will take the five minutes or so necessary to watch the video in full.  It is not only educational, but also highly entertaining, to follow the way in which the creator, ‘SethBling’, used a combination of neural networks and genetic algorithms to enable a computer to learn how to successfully complete a level of the game using only the inputs available to a human player, i.e. the eight buttons on the game controller.

Conclusion – Our Machines Are Not Inventors

I do not believe that any of the above examples demonstrate machine ‘invention’ or ‘creativity’.  The fundamental problem with the proposition is simple: in every single case, a discrete and limited number of input parameters is transformed into a corresponding discrete and limited number of output parameters via a specific set of computational functions.  This kind of deterministic behaviour is not something that we would normally describe as ‘creative’, even if a particular system is sufficiently complex that we might regard the results as ‘surprising’.  In general, the success or failure of an ML system is determined by the choices of inputs, outputs, and ML algorithms, made by a human designer. 

Even those algorithms – such as Thaler’s ‘Creativity Machine’, or genetic programming – that inherently inject randomness into the system must nonetheless be designed to determine which random developments to keep, and which to discard.  Furthermore, the randomness in such systems is little more than a ‘trick’ to sample an enormous space of possible parameter variations where it would be computationally infeasible to exhaustively search all possibilities for better results.  There is nothing new in using random variation for heuristic optimisation: well-known probabilistic algorithms such as simulated annealing, stochastic approximation, and stochastic gradient descent have been around for decades.  Having the machine randomly attempt different variations in the hope that some may produce an improved outcome is nothing more than large-scale trial-and-error.  It is, again, not something that we would normally regard as involving any ‘creativity’, no matter how pleasantly surprising some of the results might be.
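For readers unfamiliar with these decades-old techniques, here is a minimal sketch of simulated annealing on a toy objective function of my own choosing.  It shows the pattern shared by all such methods: random trial variations, plus a human-designed rule deciding which variations to keep.

```python
import math
import random

random.seed(1)

def objective(x):
    return (x - 3.0) ** 2  # toy cost function with its minimum at x = 3

x = 10.0  # arbitrary starting point
cost = objective(x)
temperature = 5.0

while temperature > 1e-3:
    candidate = x + random.gauss(0.0, 1.0)   # random trial variation
    delta = objective(candidate) - cost
    # Accept improvements always; accept some worsening moves with a
    # probability that shrinks as the 'temperature' cools, so the search
    # explores early and settles later.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x, cost = candidate, objective(candidate)
    temperature *= 0.99  # cooling schedule

print(round(x, 2))  # ends up near the minimum at x = 3
```

The algorithm has no conception of what it is optimising; the ‘cleverness’ lies entirely in the cost function and the cooling schedule, both of which a human chose.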

Nonetheless, Koza’s work demonstrates that computers can be programmed to generate solutions to problems that, were they devised through conventional processes of human thought and ingenuity, would potentially be regarded as patentable inventions.  Does this mean that the programmed computer should be regarded as ‘inventive’?  I do not think so.  What I think it means is that computers can be programmed to efficiently search a well-defined solution-space within a narrow field, using algorithms and parameters devised by human designers, by learning patterns based upon past performance.  This is neither creative nor inventive on the part of the computer – unless you redefine ‘creativity’ and ‘invention’ by reference purely to the end result, rather than the process by which the result is achieved.  This redefinition is what the likes of Abbott, Thaler and Koza are engaged in when they argue that machines are capable of invention. 

Abbott’s paper contains no credible, objective, independent evidence for ‘computational invention’.  Indeed, Abbott’s arguments in support of this concept not only rely on self-interested and subjective claims, but also beg the question.  It can hardly be possible to identify machine ‘creativity’ or ‘inventiveness’ in the absence of meaningful working definitions of these terms.  These are matters that philosophers have sought to address for – literally – millennia.  Lawyers and computer scientists therefore have a great deal of catching up to do!  As Berkeley philosophy professor John R Searle wrote in the New York Review of Books in the course of reviewing Superintelligence: Paths, Dangers, Strategies by Nick Bostrom:

…the prospect of superintelligent computers rising up and killing us, all by themselves, is not a real danger. Such entities have, literally speaking, no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior.

Much the same can be said of computers engaging in creative or inventive activity.  What we observe is a behaviour that superficially mimics invention, although none of the psychological characteristics of human creativity or inventiveness is present.

But does this not leave us in a quandary?  If computers cannot invent, and yet the outcome of running a computer program can be an invention, then who – if anyone – is the inventor?  This is where I will pick up in the next article.


Copyright © 2014
The Patentology Blog by Dr Mark A Summerfield is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Australia License.