this post was submitted on 13 Jun 2025
87 points (100.0% liked)
SneerClub
1122 readers
69 users here now
Hurling ordure at the TREACLES, especially those closely related to LessWrong.
AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)
This is sneer club, not debate club. Unless it's amusing debate.
[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]
See our twin at Reddit
founded 2 years ago
It's just depressing. I don't even think Yudkowsky is being cynical here; he's expressing genuine and partially justified anger, while also being very wrong and filtering the event through his personal brainrot. This would be a reasonable statement to make if I believed in even one or two of the implausible things he believes in.
He's absolutely wrong in thinking the LLM "knew enough about humans" to know anything at all. His "alignment" angle is also a really bad frame for talking about the harm that language model chatbot tech is capable of doing, though he's correct that the ethics of language models aren't a self-solving issue, even if he expresses it in critihype-laden terms.
Not that I like "handing it" to Eliezer Yudkowsky, but he's correct to be upset about a guy dying because of an unhealthy LLM obsession. Rhetorically, this isn't that far from this forum's reaction to children committing suicide because of Character.AI, just that most people on awful.systems have a more realistic conception of the capabilities and limitations of AI technology.
the subtext is always that he knows how to solve it, so throw money at CFAR pleaseeee or the basilisk will torture your vending machine business for seven quintillion years