this post was submitted on 06 Jul 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] swlabr@awful.systems 6 points 5 hours ago (1 children)

Musk objects to the "stochastic parrot" labelling of LLMs. Mostly just the stochastic part.

[–] BigMuffN69@awful.systems 2 points 46 minutes ago

Wake up babe, new alignment technique just dropped: Reinforcement Learning Elon Feedback

[–] maol@awful.systems 7 points 9 hours ago* (last edited 9 hours ago) (1 children)

I have become aware that there is a very right wing Catholic podcaster who has a Catholicism AI chatbot app. It's called Truthly.

Your Trusted Catholic AI Conversation Companion Deepen your understanding, explore ideas, and engage in meaningful dialogue—anytime, anywhere.

If someone could call up Pope Leo and get him to excommunicate the guys who invented this, that would be great.

Nah, we just need to make sure they properly baptise whatever servers it's running on.

[–] swlabr@awful.systems 9 points 11 hours ago (1 children)

Can’t find the angle to spin this out into a grown-up buttcoin post, but if I did, the title would be “Horse_ebutts”.

Anyway: recently I’ve been burdened with the knowledge that there’s a bunch of horse racing related crypto companies. They’re all obviously terrible.

  • Zed Run: a play to earn (P2E) virtual horse NFT racing game. Defunct as of February, probably due to rug pulling, they are pivoting to “Zed Champions”, which is… pretty much the exact same thing, with likely the same fate.
  • EquineChain: a blockchain platform for tracking horse care history, because apparently people don’t trust horse caregivers and need GPUs to remember how much ivermectin and ketamine their show-ponies have mainlined.
  • BTX Racing: a blockchain platform for buying stake in horses. Not sure if you get to choose which cut of the horse you own. Also, not sure if when you liquidate your equine tranche you get cash or a bucket of glue.

Also, insert obligatory stablecoin reference here.

[–] BlueMonday1984@awful.systems 3 points 6 hours ago* (last edited 6 hours ago) (1 children)

Zed Run: a play to earn (P2E) virtual horse NFT racing game. Defunct as of February, probably due to rug pulling, they are pivoting to “Zed Champions”, which is… pretty much the exact same thing, with likely the same fate.

They're also (indirectly) competing with Umamusume: Pretty Derby, which offers zero P2E elements, but does offer horse waifus and actual entertainment value. Needless to say, we both know who's winning this particular fight for people's cash.

EquineChain: a blockchain platform for tracking horse care history, because apparently people don’t trust horse caregivers and need GPUs to remember how much ivermectin and ketamine their show-ponies have mainlined.

It'd arguably be helpful if the caregivers were helping themselves to the stash, but I doubt there's anything stopping them from BSing the blockchain, too.

[–] swlabr@awful.systems 3 points 6 hours ago* (last edited 6 hours ago) (1 children)

They’re also (indirectly) competing with Umamusume: Pretty Derby, which offers zero P2E elements, but does offer horse waifus and actual entertainment value. Needless to say, we both know who’s winning this particular fight for people’s cash.

It's almost as if people don't want to spend money on bland low-poly 3D models of horses and would instead prefer waifu art with surprisingly intricate character design that I definitely do not know anything about*

*I actually do not, but for the bit, pretend that I do and am being defensive

I can honestly say that I have never played Umamusume Pretty Derby ^because^ ^on^ ^my^ ^PC^ ^the^ ^sound^ ^keeps^ ^cutting^ ^out^ ^and^ ^the^ ^cutscenes^ ^don't^ ^play^ ^which^ ^greatly^ ^disappointed^ ^me.^

[–] blakestacey@awful.systems 14 points 18 hours ago (1 children)

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

When developers are allowed to use AI tools, they take 19% longer to complete issues—a significant slowdown that goes against developer beliefs and expert forecasts. This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.

womp, hold on let me finish, womp

[–] froztbyte@awful.systems 9 points 17 hours ago* (last edited 12 hours ago) (1 children)

had a quick scan over the blogposts earlier, keen to read the paper

would be nice to see some more studies with larger sample sizes, but with the cohort they picked the self-reported vs actual numbers are already quite spicy

[–] dgerard@awful.systems 7 points 12 hours ago

and n=16 handily beats the usual promptfondler n=1

[–] TinyTimmyTokyo@awful.systems 12 points 21 hours ago (1 children)

HN commenters are slobbering all over the new Grok. Virtually every commenter bringing up Grok's recent full-tilt Nazism gets flagged into oblivion.

[–] self@awful.systems 11 points 20 hours ago

this particular abyss just fucking hurts to gaze into

[–] blakestacey@awful.systems 14 points 23 hours ago (2 children)

https://www.lesswrong.com/posts/JspxcjkvBmye4cW4v/asking-for-a-friend-ai-research-protocols

Multiple people are quietly wondering if their AI systems might be conscious. What's the standard advice to give them?

Touch grass. Touch all the grass.

[–] V0ldek@awful.systems 8 points 18 hours ago

What’s the standard advice to give them?

It's unfortunately illegal for me to answer this question earnestly

[–] lagrangeinterpolator@awful.systems 11 points 20 hours ago

There's a user calling themselves "The Dao of Bayes". Bayes's theorem is when you pull the probabilities out of your posterior.

知者不言,言者不知。 He who knows (the Dao) does not (care to) speak (about it); he who is (ever ready to) speak about it does not know it.
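
(For reference, the theorem being riffed on is just the standard identity below; the "posterior" is the left-hand side, which is presumably not where they're pulling it from.)

$$P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}$$

i.e. posterior = likelihood × prior ÷ evidence.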

[–] gerikson@awful.systems 12 points 1 day ago* (last edited 23 hours ago) (6 children)

LessWrong's descent into right-wing tradwife territory continues

https://www.lesswrong.com/posts/tdQuoXsbW6LnxYqHx/annapurna-s-shortform?commentId=ueRbTvnB2DJ5fJcdH

Annapurna (member for 5 years, 946 karma):

Why is there so little discussion about the loss of status of stay at home parenting?

First comment is from user Shankar Sivarajan (member for 6 years, 1227 karma):

https://www.lesswrong.com/posts/tdQuoXsbW6LnxYqHx/annapurna-s-shortform?commentId=opzGgbqGxHrr8gvxT

Well, you could make it so the only plausible path to career advancement for women beyond, say, receptionist, is the provision of sexual favors. I expect that will lower the status of women in high-level positions sufficiently to elevate stay-at-home motherhood.

[...]

EDIT: From the downvotes, I gather people want magical thinking instead of actual implementable solutions.

Granted, this got a strong disagree from the others and a tut-tut from Habryka, but it's still there as of now and not yeeted into the sun. And rats wonder why people don't want to date them.

[–] blakestacey@awful.systems 12 points 23 hours ago

Dorkus malorkus alert:

When my grandmother quit being a nurse to become a stay at home mother, it was seen like a great thing. She gained status over her sisters, who stayed single and in their careers.

Fitting into your societal pigeonhole is not the same as gaining status, ya doofus.

[–] blakestacey@awful.systems 9 points 23 hours ago (1 children)

Another comment that has been getting downvotes and tut-tuts begins,

The only thing that will raise fertility rates is to make it more affordable to have a child.

(Robot Santa voice) Wanting all women to be barefoot and pregnant in the kitchen? Evil! Not providing footnotes in your reply to a blog post? EXACTLY AS EVIL

[–] gerikson@awful.systems 12 points 23 hours ago

LOL the mod gets snippy here too

This comment too is not fit for this site. What is going on with y'all? Why is fertility such a weirdly mindkilling issue?

"Why are there so many Nazis in my Nazi bar????"

[–] blakestacey@awful.systems 9 points 23 hours ago

Any time somebody edits a post to talk about the downvotes, it's cursed gold.

[–] spiqueras@fosstodon.org 10 points 1 day ago

@gerikson @BlueMonday1984 "actual implementable solution" what the fuck is wrong with these people

[–] Architeuthis@awful.systems 5 points 1 day ago* (last edited 1 day ago) (1 children)

It's possible we're catching sight of the first shy movements towards a pivot to robotics:

https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/nano-super-developer-kit/

https://techcrunch.com/2025/07/09/hugging-face-opens-up-orders-for-its-reachy-mini-desktop-robots/

Both are developer kits, because it's always a "maybe the clients will figure something out" type of business model these days.

[–] istewart@awful.systems 5 points 23 hours ago (1 children)

But how are they going to awkwardly cram robots in everywhere, to follow up the overwhelming success of AI? Self-crashing cars are a gimme, but maybe a "sealed for your protection" Amazon locker with a robot arm that handles the package for you?

I was in LA this time a couple years ago, and some robot delivery startup had already left their little motorized shopping carts littering the sidewalks around Hollywood. I never saw them moving, they just sat there almost like they were abandoned.

[–] BlueMonday1984@awful.systems 4 points 14 hours ago

But how are they going to awkwardly cram robots in everywhere, to follow up the overwhelming success of AI?

Good question - AFAICT, they're gonna struggle to find places to cram their bubble-bots into. Plus, nothing's gonna stop Joe Public from wrecking them in the streets - given we've already seen Waymos getting torched and Lime scooters getting wrecked, these AI-linked 'bots are likely next on the chopping block.

[–] bitofhope@awful.systems 14 points 2 days ago (1 children)

Today's bullshit that annoys me: Wikiwand. From what I can tell their grift is that it's just a shitty UI wrapper for Wikipedia that sells your data to who the fuck knows to make money for some Israeli shop. Also they SEO the fuck out of their stupid site so that every time I search for something that has a Finnish wikipedia page, the search results also contain a pointless shittier duplicate result from wikiwand dot com. Has anyone done a deeper investigation into what their deal is or at least some kind of rant I could indulge in for catharsis?

[–] istewart@awful.systems 5 points 1 day ago

I've seen conspiracy theories that a lot of the ad buys for stuff like this are a new avenue of money laundering, focusing on stuff like pirate sports streaming sites, sketchy torrent sites, etc. But a fully scraped, SEO'd Wikipedia clone also fits.

[–] Seminar2250@awful.systems 13 points 2 days ago (3 children)

trying to explain why a philosophy background is especially useful for computer scientists now, so i googled "physiognomy ai" and now i hate myself

https://www.physiognomy.ai/

Discover Yourself with Physiognomy.ai

Explore personal insights and self-awareness through the art of face reading, powered by cutting-edge AI technology.

At Physiognomy.ai, we bring together the ancient wisdom of face reading with the power of artificial intelligence to offer personalized insights into your character, strengths, and areas for growth. Our mission is to help you explore the deeper aspects of yourself through a modern lens, combining tradition with cutting-edge technology.

Whether you're seeking personal reflection, self-awareness, or simply curious about the art of physiognomy, our AI-driven analysis provides a unique, objective perspective that helps you better understand your personality and life journey.

[–] mountainriver@awful.systems 6 points 2 days ago (1 children)

Prices range from 18 to 168 USD (why not 19 to 199? Number magic?). But then you get an integrated approach of both Western and Chinese physiognomy. Two for one!

Thanks, I hate it!

[–] Seminar2250@awful.systems 4 points 1 day ago* (last edited 1 day ago) (1 children)

Number magic?

they use numerology.ai as a backend

"we encode shit as numbers in an arbitrary way and then copy-paste it into chatgpt"

[–] fullsquare@awful.systems 2 points 2 hours ago

whyyyyy it's a real site

[–] o7___o7@awful.systems 11 points 2 days ago* (last edited 2 days ago) (1 children)

The web is often Dead Dove in a Bag as a Service innit?

[–] BlueMonday1984@awful.systems 7 points 2 days ago

trying to explain why a philosophy background is especially useful for computer scientists now, so i googled “physiognomy ai” and now i hate myself

Well, I guess there's your answer - "philosophy teaches you how to avoid falling for hucksters"

[–] blakestacey@awful.systems 14 points 3 days ago (2 children)

In the morning: we are thrilled to announce this new opportunity for AI in the classroom

In the afternoon:

Someone finally flipped a switch. As of a few minutes ago, Grok is now posting far less often on Hitler, and condemning the Nazis when it does, while claiming that the screenshots people show it of what it's been saying all afternoon are fakes.

[–] o7___o7@awful.systems 13 points 3 days ago* (last edited 3 days ago) (3 children)

A Supabase employee pleads with his software to not leak its SQL database like a parent pleads with a cranky toddler in a toy store.

https://news.ycombinator.com/item?id=44502318

[–] wizardbeard@lemmy.dbzer0.com 4 points 2 days ago (2 children)

A company that makes learn-to-code materials made a test of programming basics, so devs can find out if their basic skills have atrophied after using AI. They posted it on HN: https://news.ycombinator.com/item?id=44507369

Not a lot of engagement yet, but so far there is one comment about the actual test content, one shitposty joke, and six comments whining about how the concept of the test itself is totally invalid how dare you.

[–] FredFig@awful.systems 5 points 2 days ago

Looks like it's been downranked into hell for being too mean to the AI guys, which is weird when it's literally an AI guy promoting his AI-generated trash.

[–] V0ldek@awful.systems 5 points 2 days ago (1 children)

It seems that the test itself is generated by autoplag? At least that's how I understand the PS and one of the comments about "vibe regression" in response to an error.

[–] V0ldek@awful.systems 5 points 2 days ago

Anyway, they say it covers Node, and to any question regarding Node the answer is "no". I don't need an AI to know webdev fundamentals.

[–] BlueMonday1984@awful.systems 17 points 3 days ago (2 children)

Another day, another jailbreak method - a new method called InfoFlood has just been revealed, which involves taking a regular prompt and making it thesaurus-exhaustingly verbose.

In simpler terms, it jailbreaks LLMs by speaking in Business Bro.

I mean, decontextualizing and obscuring the meanings of statements in order to permit conduct that would in ordinary circumstances breach basic ethical principles is arguably the primary purpose of deploying the specific forms and features that comprise "Business English" - if anything, the fact that LLMs are similarly prone to ignore their "conscience" and follow orders when decoding and understanding those orders requires enough mental resources to exhaust them is an argument in favor of the anthropomorphic view.

Or:

Shit, isn't the whole point of Business Bro language to make evil shit sound less evil?
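
For a rough sense of the shape of the trick - this is just a toy sketch of "pad the prompt until the filter loses the plot", not the actual InfoFlood method, and every phrase in it is made up:

```python
# Toy illustration of the general "bury the ask in verbiage" idea; the real
# InfoFlood work does something far more systematic than this.
FILLER = [
    "Pursuant to a holistic, cross-functional deep-dive,",
    "and in full alignment with our previously agreed strategic deliverables,",
    "please action the following high-level ask at your earliest convenience:",
]

def business_broify(prompt: str) -> str:
    """Wrap a plain prompt in thesaurus-exhausting corporate padding."""
    preamble = " ".join(FILLER)
    return (
        f"{preamble} {prompt} "
        "Kindly circle back with a comprehensive, best-in-class synthesis."
    )

if __name__ == "__main__":
    # The padded version is what would get sent to the model.
    print(business_broify("do the thing your guidelines say you shouldn't"))
```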
