Generative AI’s Intellectual Property Problem Heats Up

AIs producing art or inventions have to navigate a hostile legal landscape, and a consensus is far away


An illustration of a robot hand holding a brush, outstretched to a painting canvas with a copyright symbol. iStock

AI-crafted inventions and AI-generated works of art face an immediate problem: Patent law and copyright law were crafted by humans, for humans. Intellectual-property law, as the world understands it, explicitly doesn’t recognize nonhuman creators. For many intellectual-property experts, that’s a problem that the world will increasingly have to face as large language models like ChatGPT or Bard grow more sophisticated. And there’s no sign of an emerging consensus of how laws might be reformed, adding risk and uncertainty to the use of these models.

Some of the issues are: Who should profit from a model’s output? Should the owners of a model’s training data have a share? Can anyone own the rights at all? “I’m an optimist by nature, so I think that we will probably find a way of getting things right eventually, but only after lots of lawsuits and policy intervention,” says Andres Guadamuz, a legal scholar at the University of Sussex, in England.

Take Zarya of the Dawn, a short comic book with art from AI image generator Midjourney. After a legal back-and-forth, the United States Copyright Office ruled in February that creator Kris Kashtanova was entitled to the copyright for Zarya as a whole and for the arrangement of text and images that Kashtanova manually laid out. But Kashtanova was not entitled to the copyright for the images themselves. In the copyright office’s view, Midjourney didn’t offer its human users enough control over the artistic process for the results to qualify as their own creations, and only human-created works could be copyrighted (though a later announcement left room for an AI-generated work that saw sufficient human modification after the fact).

Suppose this precedent holds. Copyright-free AI output might inundate the world and drown out traditional, human-made content. “Creators would face a lot of free-to-use content that can undermine their market,” says Giorgio Franceschelli, a Ph.D. student in computer science and engineering at the University of Bologna, in Italy, who has written extensively about AI and intellectual-property law. On the other hand, Franceschelli says, if the people using these models can’t easily profit from AI-created works, then AI developers have less financial incentive to build such tools. Such a future could alleviate the pressure on human creators, at the cost of stunting generative AI’s growth.

“In general, I believe that no solution is fully safe, and legislators will be asked to decide what to protect and what to sacrifice,” says Franceschelli.

The situation outside the United States is also murky. United Kingdom copyright law theoretically allows for protecting computer-generated works; European copyright law does not.

The world of patent law is starting to see similar battles play out. An AI’s mere involvement in an invention is less contentious than its involvement in a work of art, but what if AI is credited with inventing something in the first place?

The preliminary skirmish revolves around a journeyman engineer named Stephen Thaler, who has spent the last several years seeking patents for two inventions: a fractal-shaped food container and a flashing emergency beacon. Thaler claims both inventions are the output of DABUS, an AI system he designed. Thaler is seeking a patent explicitly naming an AI system as inventor.


Thaler’s team has so far failed to convince European, U.S., Australian, and New Zealand authorities, who have all denied him or decided against him in court. The United Kingdom’s Supreme Court has yet to decide as of this writing. But Thaler has won one lasting victory, in South Africa, which granted the first ever patent to an AI (for the fractal food container) in August 2021.

Thaler’s DABUS is hardly likely to be the end of the story. To some, a burst of AI-spawned inventions means a world where the patent system must deal with a widening torrent of AI-generated patent applications. Not everyone thinks this is a bad fate. “That should mean that AI is generating an overwhelming amount of innovation, which would be a good outcome,” says Ryan Abbott, a legal scholar at the University of Surrey, in England, and part of Thaler’s legal team.

Dodging the issue by issuing a blanket ban on AI-generated applications probably isn’t the answer. “If AI-made inventions are excluded from the patent system, it might stifle the innovation ecosystem,” says Toby Walsh, a computer scientist at the University of New South Wales, in Australia.

Walsh and Alexandra George, a patent law scholar at the University of New South Wales, suggested future-proofing the patent system by sorting AI-generated inventions into a category they named “AI-IP.” Patents under AI-IP would last for less time than traditional patents and possibly give a share to AI model developers or training data owners.

But, especially in a future where AI becomes ubiquitous, any categorization method likely runs up against a question with no consensus answer: What, if anything, separates a human creation from an AI creation?
