Game of Thrones author sues ChatGPT owner OpenAI

Not even remotely the same. The correct analogy: someone takes all of, say, my US Fighter Projects, redraws the diagrams (perhaps as full-color 3D renderings), rewrites the text in their own words, and publishes the result under their own name.

Not in the slightest. An analogy isn't even needed here, because we know what this is: it's taking someone's work, without remuneration or consent, putting it into a database, and running it through a training algorithm so it can be regurgitated in mangled form and sold to people through mobile apps. It's plagiarism at best, intellectual property theft at worst, and at least one of those is grounds for a civil suit. It depends on the exact firm, its algorithm and database, how both are maintained and added to, and how the firm approaches authors in particular.

I know of a couple of firms that use open-source and public-domain works explicitly to sidestep this problem, but you get similar issues with image-generating GANs trained by scraping sites like Pixiv (Japan's version of DeviantArt) or Twitter, again without permission.

A film studio or video game publisher has to pay royalties, or acquire a license or some other form of legal permission, to adapt someone's book to a script. AI should be no different for putting someone's work into a database. Authors should be informed that their work is being included in a database and given a legally binding contract to sign, not have it silently scraped from the Internet by a web spider.

The most ethical solution is to tie everything up in a mass class action lawsuit, in order to establish proper boundaries: AI firms that want to use authors' and publishers' works pay for them and provide proper remuneration for their reuse.

If AI is in any way profitable it will be able to cover this. Just like film studios or games publishers do when they make things.

The disparity is that the AI trained on the texts is not there to redistribute the works, nor will it. It has simply absorbed the knowledge within them in ways that mimic our own memory.

No one in the lawsuit cares. The important part is that someone, whether a human being or the corporate owner of an AI web spider, took that work without permission, without remuneration, and used it to generate a profit. That's all that matters. How the profit is generated is immaterial. Unless AI doesn't generate a profit, of course, in which case why are we bothering with it?

Large language models don't "learn" either; they're statistical models. An analogy is that someone put the work through a paper shredder, reassembled some legible text from the shreds, reprinted the mess, and claimed it was an original work.
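To make "statistical model" a bit more concrete, here is a toy next-word-frequency sketch in Python. It is only an illustration of the statistical-continuation idea behind the shredder analogy, not how GPT-class systems are actually built; those are far larger neural networks trained over tokens, and nothing below comes from OpenAI's code.

```python
from collections import defaultdict, Counter
import random

# Toy "statistical continuation": count which word follows which in some
# training text, then extend a seed word by sampling successors in
# proportion to how often they were seen. A bigram model, nothing more.

training_text = "the night is dark and full of terrors the night is long"

def build_bigram_counts(text):
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def continue_text(seed, counts, length=5):
    out = [seed]
    for _ in range(length):
        successors = counts.get(out[-1])
        if not successors:
            break  # dead end: the last word never appeared mid-text
        choices, freqs = zip(*successors.items())
        out.append(random.choices(choices, weights=freqs, k=1)[0])
    return " ".join(out)

counts = build_bigram_counts(training_text)
print(continue_text("the", counts))  # e.g. "the night is dark and full"
```

The point of the toy is simply that the output is stitched together from statistics over the training text; nothing in it "understands" the words.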

That's not how copyright works, unfortunately, and it's not how it should work, either. If you want to debate that, you can start publishing your own works and patents under a copyleft license or something, make no money from them, and then perhaps contribute them to your favorite AI databases. That would at least be honest, if foolish.
 
Not in the slightest. An analogy isn't even needed here, because we know what this is: it's taking someone's work, without remuneration or consent, putting it into a database, and running it through a training algorithm so it can be regurgitated in mangled form and sold to people through mobile apps. It's plagiarism at best, intellectual property theft at worst, and at least one of those is grounds for a civil suit.

Nope. What you describe is how *education* generally works.


A film studio or video game publisher has to pay royalties, or acquire a license or some other form of legal permission, to adapt someone's book to a script.
Nope. How many crappy early-'80s movies were thinly veiled ripoffs of "Jaws," "Star Wars," "Alien," or "Conan the Barbarian"? *Some* got sued, but that was because they stuck too close. Most others were *clearly* derivative, but just different enough to pass legal muster. Hell, "The Asylum" makes a fair deal of moolah cranking out brain-dead "mockbusters," and I'm not seeing them getting sued into oblivion. AI, on the other hand, can at least in principle make all-new stuff after having learned from prior art. "In the style of" is not copyrightable.

I mean... who's supposed to sue for this:

[Attached image: AI_art.jpeg]

It won an award. It looks spiffy. It presumably "learned" from prior art. But who got ripped off?
 
Do you really need to hear it all again? Libraries in the United States were a gift to the public. The books were there to read if desired.

OpenAI has taken books and artwork, without permission and without monetary compensation, and fed them to its program to be categorized, disassembled, and tagged so the parts could be reassembled in some form as desired by a paying customer. This was an essential process in creating a working program that will make a profit for OpenAI. Now, they could have created their own story material or artwork, but they would have had to pay someone, a large group of someones, to do this. Instead they took the cheap route and simply took material under copyright and incorporated it into their program. I think the plaintiffs can make the case that OpenAI did, in fact, violate copyright, inform no one, pay no one, and will reap a profit off the backs of creators, writers and artists, because the material was sitting there, on the internet, for them to steal.

I would have had no problem with this if they had done everything entirely themselves. They chose not to, and went about it the wrong way.
 
This is how a copyright notice appears on a web site:



Copyright 2023 ©
No text, images or any aspect of the site design are to be published in any format (book, video, web site, etc) without prior written consent from the authors of the site, its designer and from the contributors who provided information, images or other content.
 
Nope. What you describe is how *education* generally works.

Higher education is generally wasted these days (well, since the 1970s, really) on mediocrities, yes. That's not really an indictment of... anything, honestly.

Large language models can easily reproduce text from their training data verbatim anyway, which is the important part. It's just a matter of tweaking the algorithm's output. In LLMs there's a tradeoff between "creativity" (really just synonym swapping and pattern matching, which produces the mangling of basic facts and most of the "hallucinations" LLM shills talk about) and actual, reproducible accuracy.

For a highly accurate, LISP-style expert system used for diagnosing a disease, designing a missile motor, laying out a new kind of house, or serving as a predictive search engine for legal texts, you would obviously want the latter. For replacing a middle manager, or someone equally mediocre, i.e. someone who doesn't really need a university education or much literacy, you use the former.

Something like GPT-whatever's algorithm can be tuned for either output form; it will just seem more stilted at one end and more verbose at the other. No one outside OpenAI really knows what this would look like, or how you would go about adjusting it, because they're quite tight-lipped for obvious reasons: they want to make money without paying the people they took the material from, and the tuning details are probably considered a trade secret.
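For what the "tuning" knob usually looks like in practice, here is a minimal sketch of temperature-based sampling, the standard way to trade verbatim-style fidelity against "creative" variation when picking the next token. This is a generic illustration under that assumption, not OpenAI's actual (undisclosed) implementation, and the toy scores are invented for the example.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick a token index from raw model scores (logits).

    Low temperature sharpens the distribution toward the single most likely
    token (more literal, more reproducible output); high temperature flattens
    it (more varied, more "creative", more prone to mangling facts).
    """
    if temperature <= 0:
        # Greedy decoding: always take the top-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [x / temperature for x in logits]
    peak = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - peak) for x in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=weights, k=1)[0]

# Invented scores for three candidate continuations of some prompt.
toy_logits = [4.0, 3.5, 1.0]

print(sample_next_token(toy_logits, temperature=0.1))  # almost always index 0
print(sample_next_token(toy_logits, temperature=1.5))  # indices 0 and 1 mix, 2 shows up occasionally
```

The same model weights serve both ends of the spectrum; only the decoding settings change, which is why the tradeoff is a tuning decision rather than a retraining job.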

The algorithm isn't what's important, after all, so no one is really going to touch that. What matters is that OpenAI (and other firms) harvested information from the Internet to train a machine, in order to make a profit at the end of the day, without compensating people for their work (or even informing them of it).

That's the problem: people did some work for a firm and weren't compensated for it. Simple.

Just because my car is in a driveway doesn't mean you get the right to use it to make a DoorDash delivery "because I'm not using it" or "because you didn't have that idea and now I'm capitalizing on it". We call that theft.

It won an award. It looks spiffy. It presumably "learned" from prior art. But who got ripped off?

This is why class action lawsuits exist: to pursue a large pool of damages on behalf of everyone who can show some level of harm. A lot of people got ripped off; it's not important who specifically got ripped off, just that we can assume, with a high level of reliability, that people who posted art on this website in this time period are probably affected.

Because specifically identifying that would involve painstakingly disassembling the dataset the AI was trained on, which is proprietary trade information and may not even exist anymore. It's easier to prove that OpenAI did some damage by scraping <website(s)> with spiders between <years>, and thus that artists who posted <image> or authors who posted <text> on <website(s)> are entitled to a settlement, because their work can reasonably be assumed to have been used in the construction of OpenAI's database(s).

Expect "if you were an <artist> on <Twitter> between the years <2012> and <2022>, you may be entitled to damages" in the next few years.

It means a large number of people are going to go to a law firm and submit their information, presumably their artworks or whatever, to prove they were on Twitter or other sites that may have been scraped by web spiders during the period the dataset was built.

Once the damage has been proven, the court will enter a judgment against OpenAI, probably for some fairly paltry sum like $5 billion or $200 million or whatever, and this will be divided among the people who can prove they were on Twitter or LiveJournal or whatever between the specified years. It will be paid out to their bank accounts once it's been collected. You know, after a decade-plus of back and forth.

So you might get a check for like $200 or $20 or something in 12 years lol, because that's justice or something.

In that sense, Big Tech will probably end up like a smaller version of Big Tobacco.
 
Deceiving people has been standard operating procedure among a number of companies for a long time. This is nothing new. In fact, it appears to be an outgrowth of military technology for lying to the enemy. In the past, photos were manipulated by hand with airbrushes. Someone with actual skill can take a photo and modify it in a way the average person cannot detect; I have seen this first-hand. Today, this can be done with a computer program. The "old" program appears to have been sold off because a newer, far superior image-manipulation program has replaced it. People should remember that the internet started as a secret way for scientists in distant parts of the country to communicate with each other quickly; it was called ARPANET. The Defense Advanced Research Projects Agency has done it again.

In the case of so-called AI art, images are categorized, tagged, and disassembled for later reassembly in some form at the request of a paying customer. That's all it is, but because AI is stupid, some "AI" art leaves behind traces of the original artist's signature.

And forget that tired, old hippie term "ripped off." Replace it with "I was robbed," or "Somebody stole my work." It's called Intellectual Property Theft.

A judge can and should shut all of this down at the Federal level as soon as possible.

 
A judge can and should shut all of this down at the Federal level as soon as possible.

And that would stop the Japanese, Koreans, Chinese, Indians, who-the-frak-ever from continuing on, producing ever superior AI that people in the US can simply use a VPN to download and continue to use... *how* exactly?
 
And that would stop the Japanese, Koreans, Chinese, Indians, who-the-frak-ever from continuing on, producing ever superior AI that people in the US can simply use a VPN to download and continue to use... *how* exactly?

Your lack of knowledge of copyright law is obvious. The Universal Copyright Convention was established to set standards for the use of copyrighted works. Period.

You want the world you want, and it has to be your way. Do we tell the police to go home and just let criminals be criminals because crime is forever? You want two things at the same time: a world where you get what you want, and progress that you treat as hindered by copyright holders with legitimate claims rather than by criminal activity.

This isn't progress. It's theft on a global scale. Are you for theft? Or are you saying, "I don't care where I get AI from, as long as I can get it"?

Are you aware of how many actual books are printed in China? Of how China is called a threat by the United States, yet U.S. dollars continue to flow into the country because people want cheap goods?
 
Honestly, I want the suit to go forward, because at least then we will have a decision as to whether training an LLM on copyrighted works can be considered IP theft.

I mean, this whole lawsuit could have been prevented had the developers of ChatGPT approached the various authors and asked permission.
 
Well, think about it. If some or most said no, then what? Or what if they wanted monetary compensation? Then OpenAI would need an army of lawyers, paid hundreds of dollars an hour, to cut deals with hundreds of authors. It would have drained money from the project and could have stalled everything for years. It boils down to contract negotiations with authors and their agents. Again, some could have refused any deal if they thought the compensation being offered was not fair market value.
 
Well, think about it. If some or most said no, then what? Or what if they wanted monetary compensation? Then OpenAI would need an army of lawyers, paid hundreds of dollars an hour, to cut deals with hundreds of authors. It would have drained money from the project and could have stalled everything for years. It boils down to contract negotiations with authors and their agents. Again, some could have refused any deal if they thought the compensation being offered was not fair market value.
Then you say "Thank you for your time, we will use someone else then," and walk away.

Seriously, you send ONE lawyer to talk to, say, Baen Books about using their library for your large language model. Then Baen can talk to their authors. Authors who don't want to participate can opt out. But I suspect that if you offered money, most Baen authors would accept. (The point of being an author is to be paid for doing what you like, after all.)
 
It is my opinion that OpenAI did not want to put in the time and money up front. Perhaps they felt that substituting other sources in some cases would not have met the need for certain quality books in their program, leaving it partially incomplete. I mean, you don't build a working tank out of cheap materials.
 
And are you aware of the *vast* theft of IP that the Chinese government gives a pass to if it benefits the CCP?

Objection! Relevance?

Yes. There are servers in Ubombistan that are beyond the reach of legitimate governments. I suppose we could send in Special Forces teams to blow them up, but other things are more pressing...
 
Objection! Relevance?

Yes. There are servers in Ubombistan that are beyond the reach of legitimate governments. I suppose we could send in Special Forces teams to blow them up, but other things are more pressing...
As you are well aware, we're not talking about some backwater hole. We're talking about *China.* Where IP theft is how they do just about everything.
 
As you are well aware, we're not talking about some backwater hole. We're talking about *China.* Where IP theft is how they do just about everything.

Please don't take this the wrong way, but: what are you, or the U.S. government, going to do about it? So China produces some plush figures with the Disnay label, like those cheap Rollex watches sold in some cities.
 
It is my opinion that OpenAI did not want to put in the time and money up front. Perhaps they felt that substituting other sources in some cases would not have met the need for certain quality books in their program, leaving it partially incomplete. I mean, you don't build a working tank out of cheap materials.
Exactly. And that's what's being challenged here.
 
Bluntly? YES.
Telling me what? That given a big enough, well-armed country, copyright infringement goes to the bottom of the list?

Well, let's put it this way: going to war with China over state-subsidized, mass-production copyright infringement would have a somewhat different, oh, let's call it "Return on Investment," than going to war with someplace like Haiti over state-subsidized copyright infringement.
 
Link to this was in today's email from a science-fantasy writer/blogger I follow on WordPress,

AI tools can be concerning, if not flat-out divisive, especially when abused. So I started testing, using, and examining these tools while they exploded, and I'm still doing so now that the dust is settling. I've followed the arguments on both sides for a year.

So first, some context.

Quill to Quantum: Are AI Generative Tools Creative Sidekicks or Replacing Creativity?

I've used ChatGPT and MidJourney almost every day so you didn't have to.


Dominic de Souza
Sep 8, 2023

 
And she also sent another post, with a link to an AI art article,

Playing with AI Imagery

October 9, 2023 Caroline Furlong

As someone who plays with AI art, I can confirm this author’s point. It is often hit-or-miss, and even when you get something nearly perfect, you often have to tweak it. Or just live with the fact that it’s not precisely what you wanted, simply close enough to work.
I will leave you to decide whether or not “good enough” is what you want, readers. Until then, take a look at the article below and see what you think.


 
Link to this was in today's email from a science-fantasy writer/blogger I follow on WordPress,



Quill to Quantum: Are AI Generative Tools Creative Sidekicks or Replacing Creativity?

I've used ChatGPT and MidJourney almost every day so you didn't have to.


Dominic de Souza
Sep 8, 2023


Notice: Personal opinions are not court decisions.

Second: Doing whatever the heck you want because you think you can is not a court decision.

Finally: OpenAI used copyrighted works without permission and without payment; either that happened or it didn't.
 
Article headline from today's New York Times.


A.I. Could Soon Need as Much Electricity as an Entire Country


Behind the scenes, the technology relies on thousands of specialized computer chips.
 
A further story concerning what is to me the most glaring issue behind the 'Procedural Generation' movement, namely taking without asking...

A prominent Australian novelist says writers face a "David and Goliath" battle to regain ownership of their work after thousands of books were "stolen" by a Silicon Valley-based AI aggregator.

Esperance author Fleur McDonald has written 21 books that have gone on to sell more than 750,000 copies.

But those decades of work were quickly nullified after she learned more than half of her books had been uploaded to an AI training program without her consent.

https://www.msn.com/en-au/money/mar...e-to-build-ai-linguistic-software/ar-AA1i0EsC
 
Esperance author Fleur McDonald has written 21 books that have gone on to sell more than 750,000 copies.

But those decades of work were quickly nullified after she learned more than half of her books had been uploaded to an AI training program without her consent.


Nowhere in the article does it explain how her work was "nullified."
 
Esperance author Fleur McDonald has written 21 books that have gone on to sell more than 750,000 copies.

But those decades of work were quickly nullified after she learned more than half of her books had been uploaded to an AI training program without her consent.


Nowhere in the article does it explain how her work was "nullified."

Oh please. You desperately want to keep the blinders on, don't you? Once any author's books are copied, without permission, and/or end up in the wilds, also called the internet, why buy them? Why? The price anyone can afford is FREE. Do you get that? Do you?

By the way, I sent out a copyright infringement notice a few days ago. Some guy decided it was OK to sell some out-of-print books of ours as a digital file - cheap.
 
Once any author's books are copied, without permission, and/or end up in the wilds, also called the internet, why buy them? Why? The price anyone can afford is FREE. Do you get that? Do you?
Can it with the feigned outrage and actually *think.* Where did the article say that the AI released the copies to anyone else? The article said that the AI used the books to learn, it didn't say that the AI posted the actual text of the books online. Doing the latter is of course bad. Doing the *former* hurts the author none at all, except to eventually produce competition.
 
Can it with the feigned outrage and actually *think.* Where did the article say that the AI released the copies to anyone else? The article said that the AI used the books to learn, it didn't say that the AI posted the actual text of the books online. Doing the latter is of course bad. Doing the *former* hurts the author none at all, except to eventually produce competition.

Copyright infringement? Have you heard of it? My company has dealt with infringers.

NOT actual text from the OpenAI trial:

"Your honor. The people can show that the theft of authors' works was critical to completing the ChatGPT Program. And that OpenAI maliciously violated existing copyright law by doing so. Put another way, the ChatGPT Program would not have been viable without the authors' work. The people are also suing OpenAI for wrongful enrichment."

You wanna make a buck off the backs of authors? Do it the right way: By AGREEMENT. By getting PERMISSION, FIRST. Not, 'we's gonna steal stuff offa all dees autors and if they raise a fuss, we'll take care of it in court.'

------------------------------------------------------------------------------------

Take Adobe. They have found another way to get things done.

https://techcrunch.com/2023/06/26/a...kSwzpCiuh-lRavleB_0Ggyoa_Y_Voe5X6OPqNxXb8ISjY


 
Can it with the feigned outrage and actually *think.* Where did the article say that the AI released the copies to anyone else? The article said that the AI used the books to learn, it didn't say that the AI posted the actual text of the books online. Doing the latter is of course bad. Doing the *former* hurts the author none at all, except to eventually produce competition.
If the AI were a human, the books would have been purchased for the AI to read.

But the books were not purchased.
 
Do we know that?
Other authors whose works have been listed as being used to teach the AI received neither requests for permission nor compensation, so I doubt that GRRM did either. Also, several of the works listed were found in a copyright-violating "free library" operating without the permission of the authors.

How the company acquired the texts of those books is going to be a rather key item of the trial.
 
Other authors whose works have been listed as being used to teach the AI received neither requests for permission nor compensation, so I doubt that GRRM did either. Also, several of the works listed were found in a copyright-violating "free library" operating without the permission of the authors.

How the company acquired the texts of those books is going to be a rather key item of the trial.
For all the article says, the AI company bought the Kindle versions off Amazon... or borrowed them from the local library. No evidence in any direction is given in the article, merely that they'd used the text.
 
For all the article says, the AI company bought the Kindle versions off Amazon... or borrowed them from the local library. No evidence in any direction is given in the article, merely that they'd used the text.
Right. Hence the question of exactly how OpenAI got the texts being key to the case. If they didn't buy the texts from some legitimate source (and therefore cannot provide a receipt!), OpenAI is guilty of unjust enrichment at the expense of the authors whose texts were used.
 
This guy's story may be informative. In short, he's working on a 3D animated movie and, as a way to keep his subscribers/funders/whatever interested while the project is ongoing, he created a few shorts and audiobooks and the like... using AI to accomplish that. Anti-AI bigots flipped out: the basic complaint was that if he's not rich enough to pay for voice actors and artists, he should not try to create new stuff. Art, it seems, is only for the wealthy to dabble in. Are you poor? Then do not use the tools that are available to you to help you share your ideas with the world.

https://www.youtube.com/watch?v=iRSg6gjOOWA
 
