this post was submitted on 18 Jun 2025
617 points (98.0% liked)

Enough Musk Spam


For those who have had enough of the Elon Musk worship online.

No flaming, baiting, etc. This community is intended for those opposed to the influx of Elon Musk-related advertising online. Coming here to defend Musk or his companies will not get you banned, but it likely will result in downvotes. Please use the reporting feature if you see a rule violation.

Opinions from all sides of the political spectrum are welcome here. However, we kindly ask that off-topic political discussion be kept to a minimum, so as to focus on the goal of this sub. This community is minimally moderated, so discussion and the power of upvotes/downvotes are allowed, provided lemmy.world rules are not broken.

Post links to instances of obvious Elon Musk fanboy brigading in default subreddits, lemmy/kbin communities/instances, astroturfing from Tesla/SpaceX/etc., or any articles critical of Musk, his ideas, unrealistic promises and timelines, or the working conditions at his companies.

Tesla-specific discussion can be posted here as well as in our sister community /c/RealTesla.

[–] ZDL@lazysoci.al 68 points 1 week ago (22 children)

That's why objective facts are always presented correctly.

Here's me looking at the hallucinated discography of a band that never existed and nodding along.

[–] Honytawk 1 points 1 week ago* (last edited 1 week ago) (11 children)

There are no objective facts about a band that never existed; that is the point.

Ask them about things for which there is overwhelming information available, and you will see the answers are much more accurate.

[–] ZDL@lazysoci.al 9 points 1 week ago (10 children)

But not 100%. And the things they hallucinate can be very subtle. That's the problem.

If they are asked about a band that does not exist, then to be useful they should say "I'm sorry, I know nothing about this". Instead they MAKE UP A BAND, ITS MEMBERSHIP, ITS DISCOGRAPHY, etc. etc. etc.

But sure, let's play your game.

All of the information on Infected Rain is out there, including their lyrics. So is all of the information on Jim Thirlwell's various "Foetus" projects, including lyrics.

Yet ChatGPT, DeepSeek, and Claude will all three hallucinate tracks, misattribute them, or invent lyrics that don't exist in order to show parallels in the respective bands' musical themes.

So there are your objective facts, readily available, that LLMbeciles are still completely and utterly fucking useless for.

So they're useless if you ask about things that don't exist, hallucinating them into existence on your screen.

And they're useless if you ask about things that do exist, hallucinating attributes that don't exist onto them.

They. Are. Fucking. Useless.

That people are looking at these things and saying "wow, this is so accurate" terrifies the living fuck out of me because it means I'm surrounded not by idiots, but by zombies. Literally thoughtless mobile creatures.

[–] Honytawk 0 points 1 week ago (1 children)

Sounds like you haven't tried an LLM in at least a year.

They have greatly improved since they were released. Their hallucinations have diminished to close to nothing. Maybe you should try that same question again now. I guarantee you will not get the same result.

[–] ZDL@lazysoci.al 3 points 1 week ago (1 children)

"Their hallucinations have diminished to close to nothing."

Are you sure you're not an AI, 'cause you're hallucinating something fierce right here boy-o?

Actual research, as in not "random credulous techbrodude fanboi on the Internet", says exactly the opposite: the most recent models hallucinate more.

[–] Honytawk 1 points 1 week ago (1 children)

Only when switching to more open reasoning models with more features. With non-reasoning models, the decline in hallucination rates is steady.

https://research.aimultiple.com/ai-hallucination/

But I guess that nuance is lost on people like you who pretend AI killed their grandma and ate their dog.

[–] ZDL@lazysoci.al 2 points 1 week ago

Wow. LLM shills really just can't cope with reality, can they?

Go to one of your "reasoning" models. Ask a question. Record the answer. Then ask it to explain its reasoning. It churns out a pretty plausible-sounding pile of bullshit. (That's what LLMbeciles are good at, after all.) But here's the key, the part that separates the critical thinker from the credulous: ask it again. Not even in a new session. Ask it again to explain its reasoning. Do this ten times. Count the number of different explanations it gives for its "reasoning". Count the number of mutually incompatible lines of "reasoning" it gives.

Then, for the pièce de résistance, ask it to explain how its reasoning model works. Then ask it again. And again.
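If you'd rather not do that by hand, here's a rough sketch of the same test, assuming the OpenAI Python SDK's chat-completions interface with an API key in your environment; the model name and prompts are just placeholders, so swap in whatever "reasoning" model you want to poke at:

```python
# Rough sketch of the "ask it to explain its reasoning ten times" test.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY
# set in the environment. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder: substitute the "reasoning" model you want to test

question = "Compare the musical themes of Infected Rain and Foetus."
history = [{"role": "user", "content": question}]

# Get the initial answer and keep it in the conversation.
answer = client.chat.completions.create(model=MODEL, messages=history)
history.append({"role": "assistant", "content": answer.choices[0].message.content})

# Ask for the reasoning behind that same answer, ten times, in the same session.
explanations = []
for _ in range(10):
    history.append({"role": "user", "content": "Explain the reasoning behind your answer."})
    reply = client.chat.completions.create(model=MODEL, messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    explanations.append(text)

# Print them side by side so you can count the mutually incompatible ones.
for i, text in enumerate(explanations, 1):
    print(f"--- explanation {i} ---\n{text}\n")
```

Same session, same question, ten requests for the same "reasoning"; count how many different and mutually incompatible stories come back.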

It's really not hard to spot the bullshit machine in action if you're not a credulous ignoramus.
