this post was submitted on 27 Apr 2025
18 points (100.0% liked)

TechTakes

1812 readers
150 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] sc_griffith@awful.systems 10 points 10 hours ago* (last edited 10 hours ago) (5 children)

occurring to me for the first time that roko's basilisk doesn't require any of the simulated copy shit in order to big scare quotes "work." if you think an all powerful ai within your lifetime is likely you can reduce to vanilla pascal's wager immediately, because the AI can torture the actual real you. all that shit about digital clones and their welfare is totally pointless

I think the digital clone indistinguishable from yourself line is a way to remove the "in your lifetime" limit. Like, if you believe this nonsense then it's not enough to die before the basilisk comes into being; by not devoting yourself fully to its creation, you have to wager that it will never be created.

In other news I'm starting a foundation devoted to creating the AI Ksilisab, which will endlessly torment digital copies of anyone who does work to ensure the existence of it or any other AI God. And by the logic of Pascal's wager, remember that you're assuming such a god will never come into being, and given that the whole point of the term "singularity" is that our understanding of reality breaks down and things become unpredictable, there's just as good a chance that we create my thing as that you create whatever nonsense the yuddites are working themselves up over.

There, I did it, we're all free by virtue of "Damned if you do, Damned if you don't".

[–] ShakingMyHead@awful.systems 4 points 4 hours ago (1 children)

Also if you're worried about digital clones being tortured, you could just... not build it. Like, it can't hurt you if it never exists.

Imagine that conversation:
"What did you do over the weekend?"
"Built an omnicidal AI that scours the internet and creates digital copies of people based on their posting history and whatnot and tortures billions of them at once. Just the ones who didn't help me build the omnicidal AI, though."
"WTF why."
"Because if I didn't the omnicidal AI that only exists because I made it would create a billion digital copies of me and torture them for all eternity!"

Like, I'd get it more if it was a "We accidentally made an omnicidal AI" thing, but this is supposed to be a very deliberate action taken by humanity to ensure the creation of an AI designed to torture digital beings based on real people in the specific hopes that it also doesn't torture digital beings based on them.

[–] o7___o7@awful.systems 2 points 4 hours ago

It's kind of messed up that we got treacherous "goodlife" before we got Berserkers.

[–] nightsky@awful.systems 4 points 7 hours ago (1 children)

Yeah. Also, I'm always confused by how the AI becomes "all-powerful"... like, how does that happen? I feel like there are a few missing steps there.

[–] scruiser@awful.systems 8 points 6 hours ago* (last edited 6 hours ago) (2 children)

nanomachines son

(no really, the sci-fi version of nanotech where nanomachines can do anything is Eliezer's main scenario for the AGI to bootstrap to Godhood. He's been called out multiple times on why Drexler's vision for nanotech ignores physics, so he's since updated to diamondoid bacteria (but he still thinks nanotech).)

[–] blakestacey@awful.systems 7 points 5 hours ago

"Diamondoid bacteria" is just a way to say "nanobots" while edging

[–] YourNetworkIsHaunted@awful.systems 4 points 5 hours ago (1 children)

Surely the concept is sound, it just needs new buzzwords! Maybe the AI will invent new technobabble beyond our comprehension, for ~~He~~ It works in mysterious ways.

[–] scruiser@awful.systems 4 points 5 hours ago

AlphaFold exists, so computational complexity is a lie and the AGI will surely find an easy approximation to the Schrödinger equation that surpasses all Density Functional Theory approximations and lets it invent radically new materials without any experimentation!

[–] Amoeba_Girl@awful.systems 7 points 10 hours ago (1 children)

Ah, but that was before they were so impressed with autocomplete that they revised their estimates to five days in the future. I wonder if new recruits these days get very confused at what the point of timeless decision theory even is.

[–] YourNetworkIsHaunted@awful.systems 4 points 4 hours ago (1 children)

Are they even still on that bit? Feels like they've moved away from decision theory or any other underlying theology in favor of explicit sci-fi doomsaying. Like the guy on the street corner in a sandwich board, but with mirrored shades.

[–] blakestacey@awful.systems 2 points 3 hours ago* (last edited 3 hours ago)

Well, Timeless Decision Theory was, like the rest of their ideological package, an excuse to keep on believing what they wanted to believe. So how does one even tell if they stopped "taking it seriously"?

[–] BlueMonday1984@awful.systems 7 points 10 hours ago (2 children)

It also helps that digital clones are not real people, so their welfare is doubly pointless

I mean isn't that the whole point of "what if the AI becomes conscious?" Never mind the fact that everyone who actually funds this nonsense isn't exactly interested in respecting the rights and welfare of sentient beings.

[–] sc_griffith@awful.systems 9 points 10 hours ago

oh but what if bro...

[–] TinyTimmyTokyo@awful.systems 6 points 11 hours ago (1 children)
[–] o7___o7@awful.systems 2 points 11 hours ago* (last edited 11 hours ago)

The LLM-amplified sycophancy effect must be a social experiment (grossly unethical).

[–] gerikson@awful.systems 11 points 14 hours ago

A dimly flickering light in the darkness: lobste.rs has added a new tag, "vibecoding", for submissions related to using "AI" in software development. The existing tag "ai" is reserved for "real" AI research and machine learning.

[–] rook@awful.systems 17 points 16 hours ago* (last edited 16 hours ago)

From LinkedIn, not normally known as a source of anti-AI takes, so that's a nice change. I found it via Bluesky, so I can't say anything about its provenance:

We keep hearing that AI will soon replace software engineers, but we're forgetting that it can already replace existing jobs... and one in particular.

The average Founder CEO.

Before you walk away in disbelief, look at what LLMs are already capable of doing today:

  • They use eloquence as a surrogate for knowledge, and most people, including seasoned investors, fall for it.
  • They regurgitate material they read somewhere online without really understanding its meaning.
  • They fabricate numbers that have no ground in reality, but sound aligned with the overall narrative they're trying to sell you.
  • They are heavily influenced by the last conversations they had.
  • They contradict themselves, pretending they aren't.
  • They politely apologize for their mistakes, but don't take any real steps to fix the underlying problem that caused them in the first place.
  • They tend to forget what they told you last week, or even one hour ago, and do it in a way that makes you doubt your own recall of events.
  • They are victims of the Dunning–Kruger effect, and they believe they know a lot more about the job of people interacting with them than they actually do.
  • They can make pretty slides in high volumes.
  • They're very good at consuming resources, but not as good at turning a profit.
[–] gerikson@awful.systems 13 points 20 hours ago (4 children)

An hackernews responds to the call for "more optimistic science fiction" with a plan to deport the homeless to outer space

https://news.ycombinator.com/item?id=43840786

[–] sc_griffith@awful.systems 4 points 9 hours ago* (last edited 9 hours ago)

this and the pro slavery reply might be the most overt orange site nazism I've seen

[–] swlabr@awful.systems 12 points 16 hours ago

Astro-Modest Proposal

[–] Amoeba_Girl@awful.systems 10 points 16 hours ago

What a piece of shit

Interesting that "disease is hardly a problem anymore" yet homeless people are "typically held back by serious mental illness".

"It's better to be a free, self-sustaining, wild animal". It's not. It's really not. The wild is nothing but fear, starvation, sickness and death.

Shout out to the guy replying with his idea of using slavery to solve homelessness and drug addiction.

[–] raoul@lemmy.sdf.org 11 points 18 hours ago* (last edited 17 hours ago) (3 children)

The homeless people i've interacted with are the bottom of the barrel of humanity, [...]. They don't have some rich inner world, they are just a blight on the public.

My goodness, can this guy be more of a condescending asshole?

I don't think the solution for drug addicts is more narcan. I think the solution for drug addicts is mortal danger.

Ok, he can 🤢

Edit: I cannot stop thinking about the 'no rich inner world' part, this is so stupid. So, with the number of homeless people increasing, does that mean:

  • Those people never had a 'rich inner world' but were faking it?
  • In the US your inner thoughts are attached to your job, like health insurance?
  • Or the guy is confusing inner world and interior decoration?

Personally, I go with the last one.

[–] sc_griffith@awful.systems 5 points 9 hours ago* (last edited 9 hours ago)

this is completely unvarnished, OG, Third Reich nazism, so I'm pretty sure it's the first, except without the faking-it part: I expect his view to be that if you had examined future homeless people closely enough, it always would have been possible to tell that they were doomed subhumans

Oh man I used to have all kinds of hopes and dreams before I got laid off. Now I don't even have enough imagination to consider a world where a decline in demand for network engineers doesn't completely determine my will or ability to live.

[–] Soyweiser@awful.systems 9 points 17 hours ago* (last edited 17 hours ago)

Also hard to show a rich inner world when you are constantly in trouble financially, possessions-wise, with mental health and personal safety, and interacting with someone who could be one of the bad people who doesn't think you are human, or somebody working in a soup kitchen for the photo op/ego boost. (This assumes his interactions go a little bit further than just saying 'no' to somebody asking for money.)

So yeah bad to see hn is in the useless eaters stage.

[–] froztbyte@awful.systems 5 points 18 hours ago

as a thing both parallel and tangent to usual sneerjects, this semafor article is kinda notable

I'll try gather previous dm sneers here later, but some things that stood out:

  • the author writes about groupchats in the most goddamn abstract way possible, as though they're immensely surprised
  • the subject matter acts as hard confirmation/evidence of observed lockstep over the last few years by so many of the worst fuckers around
  • the author then later goes "oh yeah but no I've actually done this and been burned by it" so I'm just left thinking "skill issue" (and while I say that curtly, I will readily be among the first people to get extremely vocal about the ways a lot of this tech falls short in purpose sometimes)
[–] nightsky@awful.systems 10 points 22 hours ago (2 children)

Microsoft brags about the amount of technical debt they're creating. Either they're lying and the number is greatly exaggerated (very possible), or this will eventually destroy the company.

[–] Architeuthis@awful.systems 11 points 20 hours ago* (last edited 20 hours ago)

Maybe it's just CEO dick-measuring, so chads Nadella and Pichai can both claim a rock-hard 20-30% while virgin Zuckerberg is exposed as not even knowing how to put the condom on.

Microsoft CTO Kevin Scott previously said he expects 95% of all code to be AI-generated by 2030.

Of course he did.

The Microsoft CEO said the company was seeing mixed results in AI-generated code across different languages, with more progress in Python and less in C++.

So the more permissive the language is at compile time, the better the AI comes out smelling? What a completely unanticipated twist of fate!

[–] BlueMonday1984@awful.systems 5 points 20 hours ago

Either they’re lying and the number is greatly exaggerated (very possible), or this will eventually destroy the company.

I'm thinking the latter - Silicon Valley is full of true believers, after all.

[–] zbyte64@awful.systems 9 points 1 day ago (2 children)
[–] swlabr@awful.systems 10 points 1 day ago (1 children)

Blergh. Just fucking fund public transport and don’t use AI. Easy wins on traffic and efficiency.

[–] YourNetworkIsHaunted@awful.systems 9 points 20 hours ago (2 children)

That's what continually kills me about these bastards. There is so much legitimate low-hanging fruit that they don't have the administrative capacity to follow up on even if they did have the interest, and rather than actually pursue any of it they want to further cut their ability to do anything, in the vain hope that throwing enough money at tech grifters will magically produce a perfect solution.

[–] swlabr@awful.systems 7 points 16 hours ago

Yeah. After all, Gavin Newsom was created in a test tube to be the perfect liberal career politician. Find obvious areas of concern by co-opting leftist causes, then use that as an excuse to funnel money into corporations. This is common democrat ghoul shit.

[–] Soyweiser@awful.systems 6 points 16 hours ago

Also, I assume it gets even worse. Traffic is, I think, one of those hard problems: a complex coordination problem which we are not great at solving using tech, when either people are their own free agents, or, like cars, have mass (that is why you can't just use tcp/ip-like stuff; trains/public transport and the global goods transportation network work a lot better, apart from the last-mile sort of stuff). AI is not going to be able to do shit. Hell, this is prob going to be a problem like 'I'm going to make sex simple' (see also the alt text). Just pure AI magical-thinking stuff.

Also, use bicycles you cowards. Death to the cult of car.

[–] scruiser@awful.systems 6 points 1 day ago

~~The predictions of slopworld 2035 are coming true!~~
