
Link without the paywall

https://archive.ph/OgKUM

[–] mienshao@lemm.ee 77 points 1 day ago (2 children)

American law has become a literal fucking joke (IAAL). I could've guessed the outcome of this case without knowing a single fact: the huge corporation wins over the authors. American law is no longer capable of holding major corporations to account, so we need a new legal system—one that's actually functional.

[–] MCasq_qsaCJ_234@lemmy.zip 17 points 1 day ago (1 children)

Do you want a new constitution in the United States?

[–] DarkDarkHouse@lemmy.sdf.org 19 points 1 day ago

Could start with a guillotine for corporations and see how that goes.

[–] drmoose@lemmy.world 0 points 1 day ago (2 children)

But the actual process of an AI system distilling from thousands of written works to be able to produce its own passages of text qualified as “fair use” under U.S. copyright law because it was “quintessentially transformative,” Alsup wrote.

That's the actual argument, and the judge is right here. LLMs are transformative in every sense of the word. The underlying technology is even called "transformers".

[–] Leesi@lemmy.blahaj.zone 10 points 1 day ago* (last edited 1 day ago) (1 children)

Fallacious argument.

Something that can't generate a wine glass filled to the brim without a band-aid fix is far from "transformative." And even if it were:

Only the owner of copyright in a work has the right to prepare, or to authorize someone else to create, a new version of that work.

More like obfuscated plagiarism.

[–] drmoose@lemmy.world 1 points 23 hours ago (1 children)

Nope, I'm literally a data programmer working in this field. Any sufficiently transformed data, even data derived from strictly copyrighted material, is transformative work; current LLMs meet that criterion and will continue to do so. Wanna bet?

[–] LwL@lemmy.world 2 points 21 hours ago* (last edited 21 hours ago) (1 children)

I think there's a blurry line here: you can easily train an LLM to just regurgitate the source material by overfitting, so at what point is it "transformative enough"? There's little doubt that current flagship models usually are transformative enough, but that doesn't apply to everything built on the same technology, even though this case will be used as precedent for all of it.

There's also another issue: while safeguards are generally in place, without them LLMs would be quite capable of quoting entire pages of popular books, at least. And jailbreaking LLMs isn't exactly unheard of. They also, at least in the past, had a habit of repeating news articles on obscure topics verbatim.
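For what it's worth, checking for that kind of verbatim regurgitation is mechanically simple. Here's a minimal sketch; the texts, n-gram size, and threshold are all invented for illustration, and real safeguards are far more involved than this:

```python
# Minimal sketch of a verbatim-overlap check. The texts, n-gram
# size, and threshold are invented for illustration.

def char_ngrams(text: str, n: int = 50) -> set[str]:
    """All lowercase character n-grams of `text`, whitespace-normalized."""
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def verbatim_overlap(output: str, source: str, n: int = 50) -> float:
    """Fraction of the output's n-grams that appear verbatim in the source."""
    out = char_ngrams(output, n)
    return len(out & char_ngrams(source, n)) / len(out) if out else 0.0

source = ("It was the best of times, it was the worst of times, "
          "it was the age of wisdom, it was the age of foolishness.")
generation = ("As the model put it: it was the best of times, it was the "
              "worst of times, it was the age of wisdom, indeed.")

if verbatim_overlap(generation, source) > 0.3:  # arbitrary threshold
    print("Output reproduces long verbatim runs of the source.")
```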

What I'm mainly getting at is that LLMs can be transformative, but they can also plagiarize, much like any human could. The question then is: if training LLMs on copyrighted data is allowed, will the company be held accountable when its LLM does plagiarize, the same way a person would be? Or would the better decision be to prohibit training on copyrighted data, because meaningful transformation can't be guaranteed and it's very hard for copyright holders to actually find these violations?

Though idk the case details; if the argument focused purely on using the material to produce the model, rather than on the ultimate step of outputting text to anyone who asks, it was probably doomed from the start, and the decision makes perfect sense. That doesn't seem too unlikely to have happened, either, because recognizing the distinction would require the lawyer arguing the case to actually understand what training an LLM does.

[–] Natanael@infosec.pub 2 points 15 hours ago

This case didn't cover the copyright status of outputs. The ruling so far is just about the process of training itself.

IMHO generative ML companies should be required to build a process that tracks the influence of distinct samples on the outputs and informs users of the potential licensing status.
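A toy version of that kind of provenance check might look like the sketch below. The fingerprinting scheme, sample ids, texts, and threshold are all made up for illustration; tracking real influence through a trained model is a much harder problem than matching n-grams:

```python
# Toy provenance check: fingerprint known samples by hashing their
# word n-grams, then report which samples a generated output shares
# many fingerprints with. All sample ids, texts, and thresholds are
# invented for illustration.
from collections import defaultdict

def fingerprints(text: str, n: int = 5) -> set[int]:
    """Hashes of every word n-gram in `text`."""
    words = text.lower().split()
    return {hash(" ".join(words[i:i + n])) for i in range(len(words) - n + 1)}

# Index mapping each fingerprint to the sample ids that contain it.
samples = {
    "novel_123 (all rights reserved)":
        "call me ishmael some years ago never mind how long precisely",
    "article_456 (CC BY)":
        "the quick brown fox jumps over the lazy dog every single day",
}
index: dict[int, set[str]] = defaultdict(set)
for sample_id, text in samples.items():
    for fp in fingerprints(text):
        index[fp].add(sample_id)

def likely_sources(output: str, min_hits: int = 3) -> set[str]:
    """Sample ids sharing at least `min_hits` fingerprints with the output."""
    hits: dict[str, int] = defaultdict(int)
    for fp in fingerprints(output):
        for sample_id in index.get(fp, ()):
            hits[sample_id] += 1
    return {s for s, count in hits.items() if count >= min_hits}

print(likely_sources("he said call me ishmael some years ago never mind how long it took"))
# -> {'novel_123 (all rights reserved)'}
```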

Division of liability / licensing responsibility should depend on who contributes what to the prompt / generation. The less it takes for the user to trigger the model into generating an output clearly derived from a protected work, the more liability lies with the model operator. If the user couldn't have known, they shouldn't be liable. If the user deliberately used jailbreaks, etc., the user is clearly liable.

You get a weird edge case, though, when users unknowingly copy prompts containing jailbreaks.

https://infosec.pub/comment/16682120

[–] actionjbone@sh.itjust.works 5 points 1 day ago

Yeah, well, I could call my dick the Magnum Opus but that wouldn't make it two feet long.