In the past few days, there has been a disproportionate level of attention given to a somewhat academic study of the costs imposed upon the US economy by purported patent ‘trolls’, or ‘non-practicing entities’ (NPEs).
The study, conducted by Boston University law researchers James Bessen and Michael Meurer, is entitled The Direct Costs from NPE Disputes, and a working draft is available from SSRN.
As it has been presented in the technology media (see, e.g., US patent trolling costs $29b: study and Patent trolling cost the US $29 BILLION in 2011), the study shows that patent trolls impose a huge burden on innovation, and that this is further proof of our ‘broken’ patent system. This is great headline fodder (or click bait), but does it really add up?
Reading the full paper by Bessen and Meurer raises, for us at least, a number of issues, concerns and questions which are (unsurprisingly) absent from the bulk of the media coverage. Here are just a few…
- If it is indeed true that patent trolls exact a $29 billion ‘tax’ on the US economy, then this is certainly cause for alarm. But does this figure really pass the ‘smell test’, or is it just too implausible to take seriously? If it is wrong, then this study is adding to the hysteria around purported problems with the patent system without due cause. When the figures in the study are stacked up against the total number of technology companies operating in the US, and the total R&D expenditure, it is frankly difficult to believe that the results are a true reflection of reality.
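One way to run that ‘smell test’ is simple arithmetic. The sketch below is ours, not the study’s, and the R&D figure is an assumed round number: total US R&D expenditure in 2011 was on the order of $400 billion. On that assumption, the claimed NPE cost would amount to several percent of everything the US spends on R&D in a year, which is the scale of the claim readers are being asked to accept.

```python
# Back-of-the-envelope plausibility check for the $29 billion claim.
# ASSUMPTION: total US R&D spending in 2011 was roughly $400 billion.
# This is an order-of-magnitude figure for illustration, not a number
# taken from the Bessen and Meurer study.
npe_cost_claim = 29e9       # the study's headline direct-cost figure
us_rd_spending = 400e9      # assumed round figure for total US R&D

share = npe_cost_claim / us_rd_spending
print(f"Claimed NPE cost as a share of total US R&D: {share:.1%}")
```

If the assumed denominator is even roughly right, the claim is that NPE disputes drain a sum comparable to a mid-single-digit percentage of all US R&D spending, every year.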
- The raw data for the study comes from RPX Corporation, a ‘patent aggregator’ which offers ‘defensive buying, acquisition syndication, patent intelligence and advisory services’. Basically, RPX acquires patents (just like a ‘troll’), but with the stated intent of using them to remove trolls from the market, and to assist the victims of trolls. Companies pay not insubstantial fees to become RPX ‘members’. The survey data used in the study is from RPX clients, or other associated firms, and the broader litigation data is from RPX’s own database, selected and compiled according to its own criteria. While the study’s authors are keen to point out that RPX had no say in how they used the data, or presented their research, they are nonetheless completely dependent on information that is unlikely to be free from selection bias.
- There is no differentiation in the study (because there is no differentiation in RPX’s data) between different kinds of NPE. RPX uses the term to encompass patent assertion entities (i.e. organisations whose primary business model is to acquire and assert patents in order to obtain settlement and license fees) as well as individual inventors, universities, and non-competing entities (i.e. operating companies asserting patents well outside the area in which they make products and compete). Not all of these entities are patent ‘trolls’. Indeed, it may be that the true ‘trolls’, i.e. those entities which make absolutely no contribution to innovation within the economy, are in a minority.
- The statistical methods employed in the study are opaque, and lack any sensible or meaningful assessment of error or confidence. For all we can determine from the published data, the number ‘$29 billion’ could mean ‘anywhere between $100 million and $100 billion’. Or it could mean something else entirely. People who perform these kinds of analyses need to understand a simple fact: if you cannot establish the ‘error bars’ on your results, they are meaningless to a statistically informed reader, and worse than meaningless to the lay person, who may treat them as accurate and precise.
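To make the ‘error bars’ point concrete: with the raw survey responses in hand, even a simple percentile bootstrap would yield a confidence interval around a headline estimate. The sketch below uses randomly generated, hypothetical per-respondent cost figures (nothing here comes from the study’s data); the heavy right skew is typical of litigation costs, and it is exactly what makes an interval potentially very wide relative to the point estimate.

```python
import random

# Hypothetical per-respondent cost figures, heavily right-skewed.
# These are illustrative only -- NOT data from the Bessen and Meurer study.
random.seed(42)
costs = [random.lognormvariate(0, 2) for _ in range(200)]

def bootstrap_ci(data, n_resamples=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean."""
    n = len(data)
    means = []
    for _ in range(n_resamples):
        resample = [random.choice(data) for _ in range(n)]
        means.append(sum(resample) / n)
    means.sort()
    lower = means[int((alpha / 2) * n_resamples)]
    upper = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lower, upper

point = sum(costs) / len(costs)
lo, hi = bootstrap_ci(costs)
print(f"mean = {point:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

Reporting the interval alongside the point estimate is the minimum a reader needs to judge whether ‘$29 billion’ means ‘roughly $29 billion’ or ‘somewhere within an order of magnitude of $29 billion’.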