this post was submitted on 14 Jul 2025
19 points (100.0% liked)

TechTakes


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] nfultz@awful.systems 2 points 2 hours ago* (last edited 2 hours ago)

Nikhil's guest post at Zitron just went up - https://www.wheresyoured.at/the-remarkable-incompetence-at-the-heart-of-tech/

EDIT: the intro was strong enough I threw in $7. Second half is just as good.

[–] TinyTimmyTokyo@awful.systems 9 points 13 hours ago (1 children)
[–] Architeuthis@awful.systems 6 points 11 hours ago* (last edited 11 hours ago)

also here https://awful.systems/post/4995759

The long and short of it is Mother Jones discovered TPO's openly Nazi alt.

[–] nfultz@awful.systems 10 points 1 day ago (1 children)

https://www.profgalloway.com/ice-age/ Good post until I hit this part:

Instead of militarizing immigration enforcement, we should be investing against the real challenge: AI. The World Economic Forum says 9 million jobs globally may be displaced in the next five years. Anthropic’s CEO warns AI could eliminate half of all entry-level white-collar jobs. Imagine the population of Greece storming the shores of America and taking jobs (even jobs Americans actually want), as they’re willing to work 24/7 for free. You’ve already met them. Their names are GPT, Claude, and Gemini.

Having a hard time imagining 300, but with AI, myself, Scott. Could we, like, not shoehorn AI into every other discussion?

[–] Soyweiser@awful.systems 8 points 22 hours ago* (last edited 22 hours ago) (3 children)

IIRC Galloway was a pro-cryptocurrency guy, so this tracks.

E: imagine if the 3d printer people had the hype machine behind them like this. 'China better watch out, soon all manufacturing of products will be done by people at home'. Meanwhile China: [Laughs in 大跃进 (Great Leap Forward)].

[–] mlen@awful.systems 1 points 1 hour ago

I think 3D printing never took off because it's one of those things that empowers people, i.e. to repair stuff or build their own things, so the number of opportunities to grift seems smaller (although I'm probably underestimating it).

Most of the recently hyped technologies had goals that were exact opposites of empowering the masses.

[–] nfultz@awful.systems 2 points 3 hours ago

I liked his stuff on WeWork back in the day. Funny how he could see one tech grift really clearly and fall for another. Then again, WeWork is in the black these days. Anyway, I think Galloway pivoted (apologies) to Men's Rights lately; he also gave some money to UCLA Extension (i.e. not the main campus), which is a bit hard to interpret.

[–] fullsquare@awful.systems 4 points 21 hours ago (1 children)

yeah lol ez just 3dprint polypropylene polymerization reactor. what the fuck is hastelloy?

[–] Soyweiser@awful.systems 7 points 18 hours ago (1 children)

Yeah, but we never got that massive hype cycle for 3d printers. Which in a way is a bit odd, as it could have happened. Nanomachines! Star Trek replicators! (Getting a bit offtopic from Galloway being a cryptobro.)

[–] scruiser@awful.systems 2 points 2 hours ago

I can imagine it clearly... a chart showing minimum feature size decreasing over time (using cherry-picked data points) with a dotted-line projection of when 3d printers would get down to nanotech scale. 3d-printer-related companies would warn of the dangers of future nanotech and ask for legislation regulating it (with the language of the legislation completely failing to affect current 3d printing technology). Everyone would be buying 3d printers at home, and lots of shitty startups would be selling crappy 3d-printed junk.

[–] gerikson@awful.systems 13 points 1 day ago* (last edited 1 day ago) (2 children)

Here's an example of normal people using Bayes correctly (rationally assigning probabilities and acting on them) while rats Just Don't Get Why Normies Don't Freak Out:

For quite a while, I've been quite confused why (sweet nonexistent God, whyyyyy) so many people intuitively believe that any risk of a genocide of some ethnicity is unacceptable while being… at best lukewarm against the idea of humanity going extinct.

(Dude then goes on to try to game-theorize this, I didn't bother to poke holes in it)

The thing is, genocides have happened, and people around the world are perfectly happy to advocate for them in diverse situations. Probability-wise, the risk of genocide somewhere is very close to 1, while the risk of "omnicide" is much closer to zero. If you want to advocate for eliminating something, working to eliminate the risk of genocide is much more rational than working to eliminate the risk of everyone dying.

At least one commenter gets it:

Most people distinguish between intentional acts and shit that happens.

(source)

Edit: never read the comments (again). The commenter referenced above obviously didn't feel like a pithy one-liner adhered to the LW ethos, and instead added an addendum wondering why people were more upset about police brutality killing people than about traffic fatalities. Nice "save", dipshit.

[–] lagrangeinterpolator@awful.systems 11 points 1 day ago (2 children)

Hmm, should I be more worried and outraged about genocides that are happening at this very moment, or some imaginary scifi scenario dreamed up by people who really like drawing charts?

One of the ways the rationalists try to rebut this is through the idiotic dust specks argument. Deep down, they want to smuggle in the argument that their fanciful scenarios are actually far more important than real life issues, because what if their scenarios are just so bad that their weight overcomes the low probability that they occur?

(I don't know much philosophy, so I am curious about philosophical counterarguments to this. Mathematically, I can say that the more they add scifi nonsense to their scenarios, the more that reduces the probability that they occur.)
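To make that last point concrete: a scenario that requires several assumptions to all hold is at most as likely as its least likely assumption, and (if the assumptions are roughly independent) usually far less, since the probabilities multiply. A toy sketch with made-up numbers:

```python
# Made-up probabilities that each added scifi assumption holds.
assumptions = [0.5, 0.3, 0.1, 0.05]

# The whole scenario needs all of them, so (assuming independence)
# each extra assumption multiplies in another factor <= 1.
joint = 1.0
for p in assumptions:
    joint *= p

print(joint)  # ~0.00075, far below any single assumption
```

Every detail piled onto the scenario can only shrink that product, which is the mathematical face of the conjunction fallacy.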

You know, I hadn't actually connected the dots before, but the dust speck argument is basically yet another ostensibly-secular reformulation of Pascal's wager. Only instead of Heaven being infinitely good if you convert there's some infinitely bad thing that happens if you don't do whatever Eliezer asks of you.

[–] fullsquare@awful.systems 9 points 1 day ago (1 children)

reverse dust specks: how many LWers would we need to permanently deprive of access to internet to see rationalist discourse dying out?

[–] swlabr@awful.systems 6 points 1 day ago (1 children)

What’s your P(that question has been asked at a US three letter agency)

[–] fullsquare@awful.systems 9 points 1 day ago

it either was, or wasn't, so 50%

[–] Soyweiser@awful.systems 10 points 1 day ago (3 children)

Recently, I've realized that there is a decent explanation for why so many people believe that - if we model them as operating under a strict zero-sum game model of the world… ‘everyone loses’ is basically an incoherent statement - as a best approximation it would either denote no change and therefore be morally neutral, or an equal outcome, and would therefore be preferable to some.

Yes, this is why people think that. This is a normal thought to think others have.

[–] bitofhope@awful.systems 7 points 20 hours ago

Here's my unified theory of human psychology, based on the assumption most people believe in the Tooth Fairy and absolutely no other unstated bizarre and incorrect assumptions no siree!

[–] o7___o7@awful.systems 12 points 1 day ago (1 children)

Why do these guys all sound like Death Note, but stupid?

[–] dgerard@awful.systems 12 points 1 day ago

because they cribbed their ideas from Death Note, and they're stupid

[–] zogwarg@awful.systems 7 points 1 day ago (1 children)

I mean if you want to be exceedingly generous (I sadly have my moments), this is actually remarkably close to the "intentional acts" and "shit happens" distinction, in a perverse Rationalist way. ^^

[–] Soyweiser@awful.systems 4 points 22 hours ago

That's fair, if you want to be generous; if you're not going to be, I'd say there are still conceptually large differences between the quote and "shit happens". But yes, you are right. If only they had listened to Scott when he said "talk less like robots".

[–] Soyweiser@awful.systems 10 points 1 day ago (3 children)

Somebody found a relevant reddit post:

Dr. Casey Fiesler ‪@cfiesler.bsky.social‬ (who has clippy earrings in a video!) writes: "This is fascinating: reddit link

Someone 'worked on a book with ChatGPT' for weeks and then sought help on Reddit when they couldn't download the file. Redditors helped them realize ChatGPT had just been roleplaying/lying and there was no file/book…"

[–] blakestacey@awful.systems 17 points 1 day ago

After understanding a lot of things it’s clear that it didn’t. And it fooled me for two weeks.

I have learned my lesson and now I am using it to generate one page at a time.

qu1j0t3 replies:

that's, uh, not really the ideal takeaway from this lesson

[–] ebu@awful.systems 9 points 1 day ago* (last edited 22 hours ago) (1 children)

you have to scroll through the person's comments to find it, but it does look like they did author the body of the text and uploaded it as a docx into ChatGPT. so points for actually creating something, unlike the AI bros

it looks like they tried to use ChatGPT to improve narration. to what degree the token smusher has decided to rewrite their work in the smooth, recycled plastic feel we've all come to know and despise remains unknown

they did say they are trying to get it to generate illustrations for all 700 pages, and moreover appear[ed] to believe it can "work in the background" on individual chapters with no prompting. they do seem to have been educated on the folly of expecting this to work, but as blakestacey's other reply pointed out, they appear to now be just manually prompting one page at a time. godspeed

[–] Soyweiser@awful.systems 5 points 18 hours ago

They've now deleted their post, and I assume a lot of others, but they also claim they have no time to really write and just wanted a collection of stories for their kid(s). Which doesn't make sense; creating 700 pages of kids' stories is a lot of work, even if you let a bot improve the flow. Unless they just stole a book of children's stories from somewhere. (I know these books exist, as one of my brothers' kids tricked me into reading two stories from one.)

[–] fullsquare@awful.systems 7 points 1 day ago

looks like there's either a downvote brigade keeping critical comments at +1 or 0, or reddit's brigading countermeasures kicked in in defense of the wittle promptfondler

[–] BlueMonday1984@awful.systems 6 points 1 day ago

New post from Matthew Hughes: People Are The Point, effectively a manifesto against gen-AI as a concept.
