this post was submitted on 22 Jul 2025
19 points (100.0% liked)

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit

[–] scruiser@awful.systems 8 points 1 day ago* (last edited 1 day ago) (1 children)

Some of the comments are, uh, really telling:

The main effects of the sort of “AI Safety/Alignment” movement Eliezer was crucial in popularizing have been OpenAI, which Eliezer says was catastrophic, and funding for “AI Safety/Alignment” professionals, whom Eliezer believes to predominantly be dishonest grifters. This doesn't seem at all like what he or his sincere supporters thought they were trying to do.

The irony is completely lost on them.

I wasn't sure what you meant here, where two guesses are "the models/appeal in Death with Dignity are basically accurate, but should prompt a deeper 'what went wrong with LW or MIRI's collective past thinking and decision-making?'" and "the models/appeals in Death with Dignity are suspicious or wrong, and we should be halt-melting-catching-fire about the fact that Eliezer is saying them?"

The OP replies that they meant the former... but the latter is the better answer: Death with Dignity is kind of a big reveal of a lot of flaws with Eliezer and MIRI. To recap, Eliezer basically concluded that since he couldn't solve AI alignment, no one could, and everyone is going to die. It's like a microcosm of Eliezer's ego and approach to problem-solving.

"Trigger the audience into figuring out what went wrong with MIRI's collective past thinking and decision-making" would be a strange purpose from a post written by the founder of MIRI, its key decision-maker, and a long-time proponent of secrecy in how the organization should relate to outsiders (or even how members inside the organization should relate to other members of MIRI).

Yeah, no shit, secrecy is bad for scientific inquiry and for open and honest reflection on failings.

...You know, if I actually believed in the whole AGI doom scenario (and bought into Eliezer's self-hype), I would be even more pissed at him and sneer even harder. He basically set himself up as a critical savior to mankind, one of the only people clear-sighted enough to see the real dangers and the most important question... and then he totally failed to deliver. Not only that, he created the very hype that would trigger the creation of the unaligned AGI he promised to prevent!

[–] Soyweiser@awful.systems 8 points 14 hours ago

The irony is completely lost on them.

So not only has Yud failed to properly align AI, he also failed to align the AI aligners. Time to burn down the Sequences and start over.