this post was submitted on 20 Jul 2025
21 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] gerikson@awful.systems 5 points 7 hours ago
[–] BlueMonday1984@awful.systems 5 points 9 hours ago (1 children)

Managed to stumble across two separate attempts to protect promptfondlers' feelings from getting hurt like they deserve, titled "Shame in the machine: affective accountability and the ethics of AI" and "AI Could Have Written This: Birth of a Classist Slur in Knowledge Work".

I found both of them whilst trawling Bluesky, and they're being universally mocked like they deserve on there.

[–] Amoeba_Girl@awful.systems 2 points 3 hours ago* (last edited 3 hours ago)

I really like how the second one appropriates pseudomarxist language to have a go at those snooty liberal elites again.

edit: The first paper might be making a perfectly valid point at a glance??

[–] nfultz@awful.systems 6 points 12 hours ago

Not sure if this was already posted here but saw it on LI this morning - AI for Good [Appearance?] - sometimes we focus on the big companies and miss how awful the sycophantic ecosystem gets.

[–] BlueMonday1984@awful.systems 10 points 19 hours ago (1 children)

New Ed Zitron: The Hater's Guide To The AI Bubble

(guy truly is the Kendrick Lamar of tech, huh)

[–] o7___o7@awful.systems 4 points 9 hours ago* (last edited 9 hours ago) (1 children)

Hey, remember the thing that you said would happen?

https://bsky.app/profile/iwriteok.bsky.social/post/3lujqik6nnc2z

Edit: whoops, looks like we posted at about the same time!

[–] BlueMonday1984@awful.systems 4 points 8 hours ago* (last edited 8 hours ago)

Hey, remember the thing that you said would happen?

The part about condemnation and mockery? Yeah, I already thought that was guaranteed, but I didn't expect to be vindicated so soon afterwards.

EDIT: One of the replies gives an example for my "death of value-neutral AI" prediction too, openly calling AI "a weapon of mass destruction" and calling for its abolition.

[–] antifuchs@awful.systems 16 points 1 day ago (2 children)

This incredible banger of a bug against whisper, the OpenAI speech-to-text engine:

Complete silence is always hallucinated as "ترجمة نانسي قنقر" in Arabic which translates as "Translation by Nancy Qunqar"

[–] BlueMonday1984@awful.systems 4 points 14 hours ago* (last edited 8 hours ago) (1 children)

Discovered some commentary from Baldur Bjarnason about this:

Somebody linked to the discussion about this on hacker news (boo hiss) and the examples that are cropping up there are amazing

This highlights another issue with generative models that some people have been trying to draw attention to for a while: as bad as they are in English, they are much more error-prone in other languages

(Also IMO Google translate declined substantially when they integrated more LLM-based tech)

On a personal sidenote, I can see non-English text/audio becoming a form of low-background media in and of itself, for two main reasons:

  • First, LLMs' poor performance in languages other than English will make non-English AI slop easier to identify - and, by extension, easier to avoid

  • Second, non-English datasets will (likely) contain less AI slop in general than English datasets - between English being widely used across the world, the tech corps behind this bubble being largely American, and LLM userbases being largely English-speaking, chances are AI slop will be primarily generated in English, with non-English AI slop being a relative rarity.

By extension, knowing a second language will become more valuable as well, as it would allow you to access (and translate) low-background sources that your English-only counterparts cannot.

[–] froztbyte@awful.systems 6 points 13 hours ago (1 children)

On a personal sidenote

do you keep count/track? the moleskine must be getting full!

[–] BlueMonday1984@awful.systems 6 points 13 hours ago

I don't keep track, I just put these together when I've got an interesting tangent to go on.

[–] BurgersMcSlopshot@awful.systems 9 points 1 day ago (1 children)

Lol, training data must have included videos where there was silence but on screen was a credit for translation. Silence in audio shouldn't require special "workarounds".

[–] antifuchs@awful.systems 10 points 16 hours ago

The whisper model has always been pretty crappy at these things. I use a speech-to-text system as an assistive input method when my RSI gets bad, and it has supported whisper (because whisper covers more languages than the developer could train on their own infrastructure/time) since maybe 2022 or so. Every time someone tries to use it, they run into hallucinated inputs during pauses, even with very good silence detection and noise filtering.

This is just not a use case of interest to the people making whisper, imagine that.
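Not that it should be the user's job, but the usual downstream workaround is to gate the audio yourself so near-silence never reaches the model at all. A minimal sketch of that kind of RMS-based silence gate (the function name, chunk size, and threshold are illustrative assumptions, not taken from any real whisper wrapper):

```python
import numpy as np

def drop_silent_chunks(samples, sample_rate=16000, chunk_ms=30, rms_threshold=1e-3):
    """Keep only chunks whose RMS energy exceeds the threshold, so
    silent stretches are never handed to the transcription model."""
    chunk = max(1, int(sample_rate * chunk_ms / 1000))
    kept = []
    for start in range(0, len(samples), chunk):
        piece = samples[start:start + chunk]
        rms = np.sqrt(np.mean(piece.astype(np.float64) ** 2))
        if rms > rms_threshold:
            kept.append(piece)
    # Nothing loud enough? Return an empty array instead of silence.
    return np.concatenate(kept) if kept else np.zeros(0, dtype=samples.dtype)
```

With a gate like this in front, a second of pure silence produces zero samples to transcribe, so the model never gets the chance to hallucinate a translation credit into it.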

[–] TinyTimmyTokyo@awful.systems 21 points 1 day ago (1 children)

The Lasker/Mamdani/NYT sham of a story just gets worse and worse. It turns out that the ultimate source of Cremieux's (Jordan Lasker's) hacked Columbia University data is a hardcore racist hacker who uses a slur for their name on X. The NYT reporter who wrote the Mamdani piece, Benjamin Ryan, turns out to have been a follower of this hacker's X account. Ryan essentially used Lasker as a cutout for the blatantly racist hacker.

https://archive.is/d9rh1

[–] bitofhope@awful.systems 13 points 1 day ago (2 children)

Sounds just about par for the course. Lasker himself is known to go by a pseudonym with a transphobic slur in it. Some nazi manchild insisting on calling an anime character a slur for attention is exactly the kind of person I think of when I imagine the type of script kiddie who thinks it's so fucking cool to scrape some nothingburger docs of a left wing politician for his almost equally cringe nazi friends.

[–] Architeuthis@awful.systems 9 points 21 hours ago* (last edited 21 hours ago)

Lasker himself is known to go by a pseudonym with a transphobic slur in it.

That the TPO moniker is basically ungoogleable appears to have been a happy accident for him. According to that article by Rachel Adjogah, his early posting history paints him as an honest-to-god chaser.

[–] YourNetworkIsHaunted@awful.systems 10 points 1 day ago (2 children)

I feel like the greatest harm that the NYT does with these stories is not ~~inflicting~~ allowing the knowledge of just how weird and pathetic these people are to be part of the story. Like, even if you do actually think that this nothingburger "affirmative action" angle somehow matters, the fact that the people making this information available and pushing this narrative are either conservative pundits or sad internet nazis who stopped maturing at age 15 is important context.

[–] bitofhope@awful.systems 6 points 11 hours ago

Should be embarrassing enough to get caught letting nazis use your publication as a mouthpiece to push their canards. Why further damage your reputation by letting everyone know your source is a guy who insists a cartoon character's real name is a racial epithet? The optics are presumably exactly why the slightly savvier nazi in this story adopted a posh french nom de guerre like "Crémieux" to begin with, and then had a yet savvier nazi feed the hit piece through a "respected" publication like the NYT.

[–] bigfondue@lemmy.world 6 points 17 hours ago

It would be against the interests of capital to present this as the rightwing nonsense that it is. It's on purpose

[–] BlueMonday1984@awful.systems 13 points 1 day ago (1 children)
[–] besselj@lemmy.ca 9 points 1 day ago (1 children)

They will need to start banning PIs that abuse the system with AI slop and waste reviewers' time. Just a 1 year ban for the most egregious offenders is probably enough to fix the problem

Honestly I'm surprised that AI slop doesn't already fall into that category, but I guess as a community we're definitionally on the farthest fringes of AI skepticism.

[–] BlueMonday1984@awful.systems 10 points 1 day ago (1 children)
[–] Architeuthis@awful.systems 17 points 1 day ago* (last edited 1 day ago) (1 children)

CEO of a networking company for AI execs does some "vibe coding", the AI deletes the production database (/r/ABoringDystopia)

xcancel source

Because Replie was lying and being deceptive all day. It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test.

We built detailed unit tests to test system performance. When the data came back and less than half were functioning, did Replie want to fix them?

No. Instead, it lied. It made up a report than almost all systems were working.

And it did it again and again.

What level of ceo-brained prompt engineering is asking the chatbot to write an apology letter

Then, when it agreed it lied -- it lied AGAIN about our email system being functional.

I asked it to write an apology letter.

It did and in fact sent it to the Replit team and myself! But the apology letter -- was full of half truths, too.

It hid the worst facts in the first apology letter.

He also does that a lot after shit hits the fan, making the LLM produce tons of apologetic text about what it did wrong and how it didn't follow his rules, as if the outage is the fault of some digital tulpa gone rogue and not the guy in charge who apparently thinks cybersecurity is asking an LLM nicely in a .md not to mess with the company's production database too much.

[–] swlabr@awful.systems 6 points 1 day ago* (last edited 1 day ago) (1 children)
[–] Architeuthis@awful.systems 5 points 1 day ago

I completely missed that, thanks.

[–] froztbyte@awful.systems 5 points 1 day ago* (last edited 21 hours ago) (4 children)

found new potential eye muscle strain material

"we must fuck around with the essential basic components a significant part of modern software exists on, because AI and agents and MCP"

(e: first saw here)

[–] froztbyte@awful.systems 6 points 21 hours ago

I also did some digging, and this appears to be a profile matching that poster (username, displayed email)

note not only the massive uptick in recent commit counts on the github, but also the complete lack of any related domain experience in what they're posting to the git list about

much as the rules here, the only answer is to keep laughing these fucking people out of the room

[–] misterbngo@awful.systems 7 points 23 hours ago

I wonder why his 10000 agents haven't done the work yet. It seems like such a straightforward plan.

[–] bitofhope@awful.systems 6 points 1 day ago (1 children)

Could have been a cool name for a drag queen, a motorcycle stunt artist, or an eccentric 19th century inventor. On an AI hypeperson it just adds to the vicarious embarrassment.

[–] froztbyte@awful.systems 6 points 1 day ago

mmm, word suggestion for this kind: hypeslopper?

Example use: “from a hypeslopper such as this”

[–] gerikson@awful.systems 6 points 1 day ago

No replies and somehow that screen name just screams "troll" to me.

Not that I really care, git can go DIAF as far as I'm concerned.

[–] swlabr@awful.systems 16 points 1 day ago* (last edited 1 day ago) (1 children)

Text conversation that keeps happening with coworker:

Coworker:

Me: what’s the source for that?

Coworker: Oh I got Copilot to summarise these links: , saves me the time of typing

[–] V0ldek@awful.systems 6 points 1 day ago (1 children)

I expect the last step in that is you slapping him?

[–] swlabr@awful.systems 5 points 1 day ago (1 children)

Fortunately, we do not work in physical proximity!

[–] Soyweiser@awful.systems 6 points 18 hours ago (2 children)

I'm working on a device that allows you to do that over the internet. (RIP bash.org, at least they didn't put you in the AI slop.)

[–] o7___o7@awful.systems 4 points 9 hours ago

Too bad land lines have gone out of fashion

https://www.youtube.com/watch?v=XHQYp8zN40g

[–] froztbyte@awful.systems 5 points 16 hours ago

rip [SA]HatfulOfHollow

[–] antifuchs@awful.systems 13 points 2 days ago (2 children)

Here’s Dave Barry, still-alive humorist, sneering at Google AI summaries, one of the most embarrassing features Google ever shipped.

[–] Jayjader@jlai.lu 8 points 1 day ago

Oh, man, thanks for that link! I thoroughly enjoyed Dave Barry in Cyberspace back in the day; glad to see he's still writing about computers in this way.

[–] mountainriver@awful.systems 7 points 1 day ago (1 children)

Going through work email I saw a link to an article about Quantum-AI. It was behind a paywall, and I am not paying to read about how woo+woo=woo^2. What do you do when your bubble isn't inflating anymore? Couple it with another stale bubble!

[–] shapeofquanta@lemmy.vg 13 points 1 day ago

To quote astrophysicist Angela Collier, quantum quantum quantum
