this post was submitted on 05 Jun 2025
46 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


(Archive)

Tickled pink that BI has decided to platform the AI safety chuds. OFC, the more probable reason of “more dosh” gets mentioned, but most of the article is about how Anthropic is more receptive to addressing AI safety and alignment.

top 9 comments

As usual, the tech media fails to consider the possibility that part of the reason Anthropic is poaching people with promises of more money and huffable farts is to get this exact headline and try to get another round of funding from the VCs.

[–] danekrae@lemmy.world 35 points 2 days ago (2 children)

Tickled pink that BI has decided to platform the AI safety chuds. OFC, the more probable reason of “more dosh” gets mentioned, but most of the article is about how Anthropic is more receptive to addressing AI safety and alignment.

Seems that I don't understand the English language anymore.

[–] scruiser@awful.systems 18 points 2 days ago (3 children)
  • "tickled pink" is a saying for finding something humorous

  • "BI" is business insider, the newspaper that has the linked article

  • "chuds" is a term of online alt-right losers

  • OFC: of fucking course

  • "more dosh" mean more money

  • "AI safety and alignment" is the standard thing we sneer at here: making sure the coming future acasual robot god is a benevolent god. Occasionally reporter misunderstand it to mean or more PR-savvy promptfarmers misrepresent it to mean stuff like stopping LLMs from saying racist shit or giving you recipes that would accidentally poison you but this isn't it's central meaning. (To give the AI safety and alignment cultists way too much charity, making LLMs not say racist shit or give harmful instructions has been something of a spin-off application of their plans and ideas to "align" AGI.)

[–] diz@awful.systems 9 points 1 day ago

making LLMs not say racist shit

That is so 2024. The new big thing is making LLMs say racist shit.

[–] swlabr@awful.systems 9 points 1 day ago

tbh I've always read OFC as just "OF Course". Dunno why I never thought there'd be fucking in there.

[–] danekrae@lemmy.world 13 points 2 days ago

I'm old, the future sucks.

[–] swlabr@awful.systems 10 points 2 days ago

as has been said since time immemorial, lurk moar

[–] LostWanderer@lemmynsfw.com 10 points 2 days ago

Man, Business Insider is also huffing their own farts by using AI to clean up amateur articles and publish them on their website (Business Insider Fart Huffing Evidence). Personally, I consider Business Insider KIA and use another source that doesn't give them clicks.

[–] TacoButtPlug@sh.itjust.works 4 points 2 days ago

Won't last with all the rot going to Anthropic