Opening up the sack with your new favourite uwu news influencer giving a quick shout-out to our old pals, the NRx. Hoped that we wouldn’t get here, but here we are, regardless.
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
I didn't know that uwu news influencer was a thing. Kind of a clash between style and topic there, but hey, whatever gets the word out.
It's probably a thing where if you start thinking about it, it's always been around, but we've just never had the right vocabulary to describe it.
Same, and also I'm still trying to process that "uwu" breached out of furry spaces and became a widely understood term. (Although I'm not entirely sure which way it took, it's also possible that it breached out of anime-related communities. Maybe someday cyber-archeologists can figure this out.)
So far away we wait for the AGI
For the billions all wasted and gone
We feel the pain of compute time lost in few thousand days
Through the sneering and the flames we carry on
Got two major pieces to share which caught my attention:
- The ‘white-collar bloodbath’ is all part of the AI hype machine, a rare moment of genuine criticism popping up in the mainstream press (CNN, to be specific)
- AI model collapse is not what we paid for, an opinion piece in The Register where Steven J. Vaughan-Nichols (who previously boosted Perplexity) complains about the declining quality of AI search
Pretty good summary of why Alex Karp is as much a horrible fucking shithead as Thiel.
https://www.thenation.com/article/culture/alex-karp-palantir-tech-republic/
Further evidence emerging that the effort to replace government employees with the Great Confabulatron is well underway, and the presumed first-order goal of getting a yes-man to sign off on whatever bullshit is going well.
Now we wait for the actual policy implications and the predictable second-order effects. Which is to say dead kids.
In a completely unprecedented turn of events, the word prediction machine has a hard time predicting numbers.
https://www.wired.com/story/google-ai-overviews-says-its-still-2024/
New Bluesky post from Baldur Bjarnason:
What’s missing from the now ubiquitous “LLMs are good for code” is that code is a liability. The purpose of software is to accomplish goals with the minimal amount of code that’s realistically possible
LLMs may be good for code, but they seem to be a genuine hazard for collaborative software dev
Loose Mission Impossible Spoilers
The latest Mission Impossible movie features a rogue AI as one of the main antagonists. That said, the AI's main powers are lies, fake news, and manipulation; it only gets as far as it does because people let fear make them manipulable, and it relies on human agents to do a lot of its work. So rather than promoting the doomerism narrative, I think the movie could actually be read as opposing the conventional doomer narrative in favor of a calm, moderate, internationally coordinated response (the entire plot could have been derailed by governments agreeing on mutual nuclear disarmament before the AI subverted them) against AIs that ultimately have only moderate power.
Adding to the post-LLM-hype predictions: I think that after the LLM bubble pops, "Terminator" style rogue AI movie plots don't go away, but take on a different spin. Rogue AIs' strengths are going to be narrower, their weaknesses are going to get more comical and absurd, and idiotic human actions are going to be more of a factor. For weaknesses it will be less "failed to comprehend love" or "cleverly constructed logic bomb breaks its reasoning" and more "forgets what it was doing after getting drawn into too long a conversation". For human actions it will be less "its makers failed to anticipate a completely unprecedented sequence of bootstrapping and self-improvement" and more "its makers disabled every safety and granted it every resource it asked for in the process of trying to make an extra dollar a little bit faster".
I hate that I'm so terminally online that I found out about the rumor that Musk and Stephen Miller's wife are bumping uglies through a horrorfic parody account
https://mastodon.social/@bitterkarella@sfba.social/114593332907413196
New article from Brian Merchant: An 'always on' OpenAI device is a massive backlash waiting to happen
Giving my personal thoughts on the upcoming OpenAI Device^tm^, I think Merchant's correct to expect mass-scale backlash against the Device^tm^ and public shaming/ostracisation of anyone who decides to use it - especially considering it's an explicit repeat of the widely clowned-on Humane AI Pin.
Expect headlines of Device^tm^ wearers getting their asses beaten in the street to follow soon afterwards. As Brian's noted, a lot of people would see wearing an OpenAI Device^tm^ as an open show of contempt for others, and between AI's public image becoming utterly fouled by the bubble and Silicon Valley's reputation going into the toilet, I can see people treating a Device^tm^ wearer as an opportunity to take their well-justified anger at tech corps out on someone who openly and willingly bootlicks for them.
This is more on the Aella cult than on her, tbh.
And didn't Aella and also Grimes have a come-to-Jesus moment when they realized they hung out with a lot of bad people? Guess nothing came of that. (That part of them is always worse than the just-a-bit-off antics they pull.)