this post was submitted on 14 Jul 2025

TechTakes


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] TinyTimmyTokyo@awful.systems 7 points 17 hours ago (2 children)

Oof, that Hollywood guest (Brian Koppelman) is a dunderhead. "These AI layoffs actually make sense because of complexity theory". "You gotta take Eliezer Yudkowsky seriously. He predicted everything perfectly."

I looked up his background, and it turns out he's the guy behind the TV show "Billions". That immediately made him make sense to me. The show attempts to lionize billionaires and is ultimately undermined not just by its offensive premise but by the world's most block-headed and cringe-inducing dialogue.

Terrible choice of guest, Ed.

[–] lagrangeinterpolator@awful.systems 9 points 16 hours ago (2 children)

I study complexity theory and I'd like to know what circuit lower bound assumption he uses to prove that the AI layoffs make sense. Seriously, it is sad that the people in the VC techbro sphere are thought to have technical competence. At the same time, they do their best to erode scientific institutions.

[–] BigMuffN69@awful.systems 1 points 5 minutes ago

My hot take has always been that current Boolean SAT/MIP solvers are probably pretty close to theoretical optimality for the problems that are interesting to humans, and that AI, no matter how "intelligent", will struggle to meaningfully improve them. Ofc I doubt that Mr. Hollywood (or Yud, for that matter) has actually spent enough time with classical optimization lore to understand this. Computer go FOOM ofc.
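For anyone who hasn't met SAT: it asks whether some assignment of true/false values satisfies a Boolean formula. A toy brute-force check (my own sketch, nothing like the engineered CDCL solvers the comment is talking about) makes the exponential search space concrete — real solvers spend decades of engineering pruning exactly this blow-up:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide CNF satisfiability by trying all 2^n assignments.

    clauses: list of clauses; each clause is a list of ints, where
    i means "variable i is true" and -i means "variable i is false".
    Returns a satisfying assignment (list of bools) or None.
    """
    for bits in product([False, True], repeat=n_vars):
        # A literal holds if its variable's bit matches its sign.
        assign = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
        # CNF: every clause must have at least one true literal.
        if all(any(assign(lit) for lit in clause) for clause in clauses):
            return list(bits)
    return None

# (x1 OR x2) AND (NOT x1 OR x2) AND (x1 OR NOT x2)
print(brute_force_sat([[1, 2], [-1, 2], [1, -2]], 2))  # → [True, True]
```

The loop is 2^n in the worst case; production solvers avoid most of that search in practice, which is exactly why "close to optimal on human-interesting instances" is a defensible hot take.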

[–] Soyweiser@awful.systems 3 points 10 hours ago (1 children)

The only way I can make the link between complexity theory and laying people off is by imagining putting people into 'can solve up to this level of problem'-style complexity classes (which regulars here should realize gets iffy fast). So I hope he explained it more than that.

[–] BlueMonday1984@awful.systems 4 points 10 hours ago (1 children)

The only complexity theory I know of is the one which tries to work out how resource-intensive certain problems are for computers, so this whole thing sounds iffy right from the get-go.

[–] Soyweiser@awful.systems 3 points 8 hours ago (1 children)

Yeah, but those resource-intensive problems can be fitted into specific classes of problems (P, NP, PSPACE, etc.), which is what I was talking about, so we are talking about the same thing.

So under my imagined theory you can classify people as 'can solve: [ P, NP, PSPACE, ... ]'. Wonder what they will do with the P class. (Wait, what did Yarvin want to do with them again?)

[–] lagrangeinterpolator@awful.systems 3 points 5 hours ago* (last edited 5 hours ago) (1 children)

There's really no good way to make any statements about what problems LLMs can solve in terms of complexity theory. To this day, LLMs, even the newfangled "reasoning" models, have not demonstrated that they can reliably solve computational problems in the first place. For example, LLMs cannot reliably make legal moves in chess and cannot reliably solve puzzles even when given the algorithm. LLM hypesters are in no position to make any claims about complexity theory.

Even if we have AIs that can reliably solve computational tasks (or, you know, just use computers properly), it still doesn't change anything in terms of complexity theory, because complexity theory concerns itself with all possible algorithms, and any AI is just another algorithm in the end. If P != NP, it doesn't matter how "intelligent" your AI is, it's not solving NP-hard problems in polynomial time. And if some particularly bold hypester wants to claim that AI can efficiently solve all problems in NP, let's just say that extraordinary claims require extraordinary evidence.
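The asymmetry that makes the "extraordinary claims" bar so high can be shown with subset sum, a standard NP-complete problem: checking a proposed certificate takes polynomial time, while the only known general way to find one is search over exponentially many subsets. A small illustrative sketch (names and numbers are mine, just to make the point concrete):

```python
from itertools import combinations

def verify(nums, target, subset):
    """Polynomial-time check: is `subset` a valid certificate?
    (Ignores multiplicity for brevity.)"""
    return all(x in nums for x in subset) and sum(subset) == target

def search(nums, target):
    """Exponential-time search: try all 2^n subsets until one sums
    to target. Returns the subset found, or None."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 9, 8, 4, 5, 7]
cert = search(nums, 15)
print(cert, verify(nums, 15, cert))  # → [8, 7] True
```

No amount of "intelligence" changes that structure: an AI that finds certificates fast for every NP-hard instance wouldn't be a smarter `search`, it would be a proof that P = NP.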

Koppelman is only saying "complexity theory" because he likes dropping buzzwords that sound good and doesn't realize that some of them have actual meanings.

[–] o7___o7@awful.systems 3 points 2 hours ago

I heard him say "quantum" and immediately came here looking for fresh-baked sneers.

[–] BurgersMcSlopshot@awful.systems 4 points 16 hours ago

Yeah, that guy was a real piece of work. If I had actually bothered to watch The Bear before, I would stop doing so in favor of sending ChatGPT a video of me yelling in my kitchen and asking it whether what's depicted is the plot of the latest episode.