Legal challenges to the development and use of generative AI are accumulating. Developers are confronting potential legal minefields involving privacy, cybersecurity and defamation.
MARY LOUISE KELLY, HOST:
The world’s leading artificial intelligence company is under legal assault. OpenAI, maker of ChatGPT, was most recently sued by The New York Times for copyright infringement, but it’s also facing suits from book authors, artists, music labels and others. NPR tech correspondent Bobby Allyn is here to explain. Hey there.
BOBBY ALLYN, BYLINE: Hey, Mary Louise.
KELLY: What are all these lawsuits against the owner of ChatGPT – what are they all about?
ALLYN: Yeah. You know, they all come down to data and how this data is being used. OpenAI, in developing popular tools like ChatGPT and the image creator DALL-E, crawled the entire internet for text and images and other material. And from that enormous bucket of data, these AI tools learn and are able to create something new.
The big legal question now is, did the company break the law by scraping up parts of the internet that contained copyrighted material? And if they did this, you know, Mary Louise, with a licensing agreement, then it would be fine. But instead, OpenAI did it without permission and without payment. I talked to Ed Klaris about this. He’s a former general counsel at The New Yorker, and he’s an expert in intellectual property law.
ED KLARIS: You can’t just go and steal huge archives and then say, it’s too hard for me to get rid of it. It seems kind of contrary to the rule of law.
KELLY: Contrary to the rule of law. OK, what is OpenAI saying to defend itself?
ALLYN: Yeah. Executives there have long pointed to something in the law known as fair use doctrine, and it says that if copyrighted material is quoted or used in some way in news reporting, research, criticism, you don’t need to ask for permission or pay anyone.
But, you know, there are two tests of fair…