this post was submitted on 27 Jul 2025

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

Against truth (samkriss.substack.com)
submitted 1 day ago* (last edited 1 day ago) by 200fifty@awful.systems to c/sneerclub@awful.systems
scruiser@awful.systems 20 points 1 day ago

Saw this posted to the Reddit SneerClub. This essay has some excellent zingers and a good overall understanding of rationalists. A few highlights...

Rationalism is the notion that the universe is a collection of true facts, but since the human brain is an instrument for detecting lions in the undergrowth, almost everyone is helplessly confused about the world, and if you want to believe as many true things and disbelieve as many false things as possible—and of course you do—you must use various special techniques to discipline your brain into functioning more like a computer. (In practice, these techniques mostly consist of calling your prejudices ‘Bayesian priors,’ but that’s not important right now.)

We're all very familiar with this phenomenon, but this author has a pithy way of summarizing it.

The story is not a case study in how rationality will help you understand the world, it’s a case study in how rationality will give you power over other people. It might have been overtly signposted as fiction, with all the necessary content warnings in place. That doesn’t mean it’s not believed. Despite being genuinely horrible, this story does have one important use: it makes sense out of the rationalist fixation on the danger of a superhuman AI. According to HPMOR, raw intelligence gives you direct power over other people; a recursively self-improving artificial general intelligence is just our name for the theoretical point where infinite intelligence transforms into infinite power.

Yep, the author nails the warped view Rationalists have about intelligence.

We’re supposedly dealing with a group of idiosyncratic weirdos, all of them trying to independently reconstruct the entirety of human knowledge from scratch. Their politics run all the way from the furthest fringes of the far right to the furthest fringes of the liberal centre.

That is a concise summary of their warped Overton Window, yeah.