A federal court in Australia has ruled that AI systems can be credited as inventors under patent law in a case that could set a global precedent.
Ryan Abbott, a professor at the University of Surrey, has filed more than a dozen patent applications around the world – including in the UK, US, New Zealand, and Australia – on behalf of US-based Dr Stephen Thaler.
The twist here is that it’s not Thaler whom Abbott is attempting to credit as an inventor, but rather Thaler’s AI device, known as DABUS.
“In my view, an inventor as recognised under the act can be an artificial intelligence system or device,” said Justice Jonathan Beach, overturning the Australian patent office’s original decision. “We are both created and create. Why cannot our own creations also create?”
DABUS consists of neural networks and was used to invent an emergency warning light, a food container that improves grip and heat transfer, and more.
Until now, all of the patent applications had been rejected—including in Australia. Each country determined that a human must be the credited inventor.
Whether AIs should be afforded certain “rights” similar to those of humans is a key debate, and one increasingly in need of answers. This patent case could be a first step towards establishing when machines – as they take on increasingly human-like capabilities – should be treated like humans.
DABUS was awarded its first patent for “a food container based on fractal geometry” by South Africa’s Companies and Intellectual Property Commission on June 24.
Following the patent award, Professor Adrian Hilton, Director of the Institute for People-Centred AI at the University of Surrey, commented:
“This is a truly historic case that recognises the need to change how we attribute invention. We are moving from an age in which invention was the preserve of people to an era where machines are capable of realising the inventive step, unleashing the potential of AI-generated inventions for the benefit of society.
The School of Law at the University of Surrey has taken a leading role in asking important philosophical questions such as whether innovation can only be a human phenomenon, and what happens legally when AI behaves like a person.”
AI News reached out to the patent experts at ACT | The App Association, which represents more than 5,000 app makers and connected device companies around the world, for their perspective.
Brian Scarpelli, Senior Global Policy Counsel at ACT | The App Association, commented:
“The App Association, in alignment with the plain language of patent laws across key jurisdictions (including Australia’s 1990 Patents Act), is opposed to the proposal that a patent may be granted for an invention devised by a machine, rather than by a natural person.
Today’s patent laws can, for certain kinds of AI inventions, appropriately support inventorship. Patent offices can use the existing requirements for software patentability as a starting point to identify the necessary elements of patentable AI inventions and applications – for example, for AI technology that is used to improve machine capability, where it can be delineated, declared, and evaluated in a way equivalent to software inventions.
But more generally, determinations regarding when and to whom inventorship and authorship should be attributed for works autonomously created by AI could represent a drastic shift in law and policy. This would have direct implications for policy questions about whether allowing patents on inventions made by machines furthers public policy goals, and could even reach into broader definitions of AI personhood.
Continued study, both by national/regional patent offices and by multilateral fora like the World Intellectual Property Organization, will be critical to informing a comprehensive debate among policymakers.”
Feel free to let us know in the comments whether you believe AI systems should have similar legal protections and obligations to humans.