this post was submitted on 05 Jun 2025
971 points (98.8% liked)

Not The Onion


Welcome

We're not The Onion! Not affiliated with them in any way! Not operated by them in any way! All the news here is real!

The Rules

Posts must be:

  1. Links to news stories from...
  2. ...credible sources, with...
  3. ...their original headlines, that...
  4. ...would make people who see the headline think, “That has got to be a story from The Onion, America’s Finest News Source.”

Please also avoid duplicates.

Comments and post content must abide by the server rules for Lemmy.world and generally abstain from trollish, bigoted, or otherwise disruptive behavior that makes this community less fun for everyone.

And that’s basically it!

[–] LovableSidekick@lemmy.world 5 points 2 days ago* (last edited 2 days ago) (6 children)

But meth is only for Saturdays. Or Tuesdays. Or days with "y" in them.

[–] GreenKnight23@lemmy.world 3 points 2 days ago

every day is meythday if you're spun out enough.

[–] pastermil@sh.itjust.works 2 points 2 days ago
[–] thirdBreakfast@lemmy.world 3 points 2 days ago

> afterallwhynot.jpg

[–] pixxelkick@lemmy.world 4 points 2 days ago* (last edited 2 days ago) (4 children)

Anytime an article posts shit like this but neglects to include the full context, it reminds me how bad journalism is today, if you can even call it that.

If I try, not even that hard, I can get GPT to state that Hitler was a cool guy and was doing the right thing.

ChatGPT isn't anything specific other than a token predictor; you can literally make it say anything you want if you know how. It's not hard.
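
(Tangent for anyone who hasn't seen it spelled out: "token predictor" means something like the sketch below. It uses the small open gpt2 model via the Hugging Face transformers library; the model and prompt are just placeholder examples, nothing OpenAI-specific. The point is that the model's only job is scoring which token comes next, which is exactly why steering the context steers the output.)

```python
# Minimal sketch of greedy next-token prediction with Hugging Face transformers.
# gpt2 and the prompt are placeholder choices for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The weather today is"
ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate one token at a time: the model only ever produces scores for
# "which token comes next", given everything in the context so far.
for _ in range(10):
    with torch.no_grad():
        logits = model(ids).logits       # scores over the whole vocabulary
    next_id = logits[0, -1].argmax()     # greedy pick: most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```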

So if you write an article about how "GPT said this" or "GPT said that," you'd better include the full context, or I'll assume you're 100% bullshit.

[–] gwildors_gill_slits@lemmy.ca 2 points 2 days ago

You're not wrong, but there's also a ton of misinformation out there, from both bad journalism and pro-LLM advocates, selling the idea that LLMs are real AI that can think and reason and operate within some kind of ethical boundaries.

Neither of those things is true, but that's what a lot of the available information about LLMs would have you believe, so it's not hard to imagine someone engaging with a chatbot and ending up with a similar result without explicitly forcing it via prompt engineering.

[–] kbal@fedia.io 3 points 2 days ago (1 children)

This slightly diminishes my fears about the dangers of AI. If they're obviously wrong a lot of the time, in the long run they'll do less damage than they would by being subtly wrong and slightly biased most of the time.

[–] TachyonTele@lemm.ee 7 points 2 days ago* (last edited 2 days ago) (1 children)

The problem is that there are morons who do whatever these spicy text predictors spit out at them.

[–] kbal@fedia.io 2 points 2 days ago (1 children)

I mean, sure, they'll still kill a few people along the way, but they're not going to contribute as much to the downfall of civilization as they might if they weren't constantly revealing their utter mindlessness. Even as it is, smart people can be fooled, at least temporarily, into thinking that LLMs understand things and are reliable partners in life.

[–] TachyonTele@lemm.ee 1 points 2 days ago

I agree with you there.
