TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Discovered new manmade horrors beyond my comprehension today (recommend reading the whole thread, it goes into a lot of depth on this shit):
"The home of 1999" already beat them to that.
Found an AI bro making an incoherent defense of AI slop today (fitting that he previously shilled NFTs):
Needless to say, he's getting dunked on in the replies and QRTs, because people like him are fundamentally incapable of being punk.
Yes, doing the thing which the entire business world is pouring billions into and trying their hardest to shove onto everyone to maximize imagined future profits, that's what counterculture is all about.
Making art with the help of tech billionaires is so punk rock man!
Wikipedia has higher standards than the American Historical Association. Let's all let that sink in for a minute.
Image should be clearly marked as AI generated and with explicit discussion as to how the image was created. Images should not be shared beyond the classroom
This point stood out to me as particularly bizarre. Either the image is garbage, in which case it shouldn't be shared in the classroom either, because school students deserve basic respect, good material, and to be held to the same standards as anyone else; or it isn't garbage, and then what are you so ashamed of, AHA?
Wikipedia also just upped their standards in another area - they've updated their speedy deletion policy, enabling the admins to bypass standard Wikipedia bureaucracy and swiftly nuke AI slop articles which meet one of two conditions:
- "Communication intended for the user", referring to sentences directly aimed at the promptfondler using the LLM (e.g. "Here is your Wikipedia article on…", "Up to my last training update…", and "as a large language model")
- Blatantly incorrect citations (examples given are external links to papers/books which don't exist, and links which lead to something completely unrelated)
Ilyas Lebleu, who contributed to the policy update, has described this as a "band-aid" that leaves Wikipedia in a better position than before, but not a perfect one. Personally, I expect this solution will be sufficient to permanently stop the influx of AI slop articles. Between promptfondlers' utter inability to recognise low-quality/incorrect citations, and their severe laziness and lack of care for their """work""", the risk of an AI slop article being sufficiently subtle to avoid speedy deletion is virtually zero.
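For flavour: the first criterion is basically string-matching on chatbot boilerplate. The actual policy is applied by human admins reading the article, not by a script, but a toy sketch of that check could look something like this (the phrase list is just the examples quoted above; the function name is mine, purely illustrative):

```python
# Hypothetical illustration only -- the speedy-deletion criterion is applied by
# human admins reading the article, not by a script. The phrase list is just
# the examples quoted in the policy above.
LLM_TELL_PHRASES = [
    "here is your wikipedia article on",
    "up to my last training update",
    "as a large language model",
]

def looks_like_llm_boilerplate(article_text: str) -> bool:
    """Return True if the text contains chatbot boilerplate aimed at the prompter."""
    lowered = article_text.lower()
    return any(phrase in lowered for phrase in LLM_TELL_PHRASES)

# Example: a draft that opens with chatbot framing gets flagged.
draft = "Here is your Wikipedia article on the history of tea in Antarctica."
print(looks_like_llm_boilerplate(draft))  # True
```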
ChatControl is back on the table here in Europe AGAIN (you've probably heard), with mandatory age checking sprinkled on top as a treat.
I honestly feel physically ill at this point. Like a constant, unignorable digital angst eating away at my sanity. I don't want any part in this shit anymore.
ChatControl in the EU, the Online Safety Act in the UK, Australia's age gate for social media, a boatload of censorious state laws here in the US and staring down the barrel of KOSA... yeah.
Yes, of course, it's everywhere. What's left but becoming a hermit...?
But you know what makes me extra mad about the age restrictions? I don't think they are a bad idea per se. Keeping teens from watching porn or kids from spending most of their waking hours on brainrot on social media is, in and of itself, a good idea. What does make me mad is that this could easily be done in a privacy-respecting fashion (towards site providers and governments simultaneously). The fact that it isn't - that you'll need to share your real, passport-backed identity with a bunch of sites - tells you everything you need to know about these endeavors, I think.
Cloudflare has publicly announced the obvious about Perplexity stealing people's data to run their plagiarism, and responded by de-listing them as a verified bot and adding heuristics specifically to block their crawling attempts.
Personally, I'm expecting this will significantly hamper Perplexity going forward, considering Cloudflare's just cut them off from roughly a fifth of the Internet.
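For a sense of why "heuristics" matter here: the naive way to block a crawler is a user-agent deny list like the toy sketch below (my own illustration, not Cloudflare's actual rules; the agent names are the ones Perplexity publicly documents, as far as I know). Cloudflare's whole point is that Perplexity's undeclared crawlers dodge exactly this kind of check, hence the behavioural heuristics on top.

```python
# Illustrative sketch only -- not Cloudflare's actual rules. A naive crawler
# block just checks the declared User-Agent against a deny list; Cloudflare's
# finding is that Perplexity also crawls with undeclared, generic-looking
# agents, which is exactly why behavioural heuristics are needed on top.
BLOCKED_AGENT_SUBSTRINGS = ["perplexitybot", "perplexity-user"]

def should_block(user_agent: str) -> bool:
    """Reject a request whose declared User-Agent matches a blocked crawler."""
    ua = user_agent.lower()
    return any(marker in ua for marker in BLOCKED_AGENT_SUBSTRINGS)

print(should_block("Mozilla/5.0 (compatible; PerplexityBot/1.0)"))  # True
print(should_block("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))    # False
```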
Recently, I've been seeing a lot of adverts from Google about their AI services. What really tickles me is how defeatist the campaign seems. Every ad is basically like "AI can't do X, but it can do Y!", where X is a job or other task that AI bros are certain that AI will eventually replace, and Y is a smaller, related thing that AI gets wrong anyway. For an ad agency, I'd expect more than this.
Found someone trying to fire back at the widespread sneering against promptfondlers:
Emphasis on "trying" here - they're getting cooked in the replies and QRTs. Here's a couple highlights - one from someone running an escape room, and one which allegedly ended in someone meeting a baseballer:
today I learnt that there are actual people irl who get genuinely upset if they catch you shoveling hot shit into your mouth. like they view it as a serious moral failing in certain circles
To be fair to baseball girl, I've found "what's this thing I know but I forgot the name of" one of the best use cases for chatbots, because web search is too fucked to help you with it now. It sucks that it's the case, but it has sadly helped me like, a couple of times (and after I insulted and redirected the chatbot when it inevitably gave me a shit initial answer).
A nice long essay by Freddie deBoer for our holiday week, on the release of GPT-5 - I wholly recommend reading the whole thing!
https://freddiedeboer.substack.com/p/the-rage-of-the-ai-guy
Choice snippet to whet your appetites:
"With all of this, I’m only asking you to observe the world around you and report back on whether revolutionary change has in fact happened. I understand, we are still very early in the history of LLMs. Maybe they’ll actually change the world, the way they’re projected to. But, look, within a quarter-century of the automobile becoming available as a mass consumer technology, its adoption had utterly changed the lived environment of the United States. You only had to walk outside to see the changes they had wrought. So too with electrification: if you went to the top of a hill overlooking a town at night pre-electrification, then went again after that town electrified, you’d see the immensity of that change with your own two eyes. Compare the maternal death rate in 1800 with the maternal death rate in 2000 and you will see what epoch-changing technological advance looks like. Consider how slowly the news of King William IV’s death spread throughout the world in 1837 and then look at how quickly the news of his successor Queen Victoria’s death spread in 1901, to see truly remarkable change via technology. AI chatbots and shitty clickbait videos choking the social internet do not rate in that context, I’m sorry. I will be impressed with the changes wrought by the supposed AI era when you can show me those changes rather than telling me that they’re going to happen. Show me. Show me!"
Scandals like that of Builder.ai - which should have their own code word, IAJI (It’s Actually Just Indians) - become more and more common[...]
This is just a strictly worse version of David's AGI (A Guy in India) sneer.
It’s history; sometimes stuff just doesn’t happen. And precisely because saying so is less fun than the alternative, some of us have to.
Freddie is clearly gesturing at a critique of a kind of Whig history here, and I fully agree, but I think his overall implications (at least so far) are off-base. He seems to be arguing that AI-based technological processes are not inevitable and that the political, economic, and social worlds are not actually required by physical necessity to follow the course predicted by its modern prophets of doom. But I think the appropriate followup to this understanding of history is that things, broadly speaking, don't just happen. History is experienced in the active voice, not the passive, and people doing things now is what can shape the kind of future we get. In as much as the Internet was coopted by capitalism and turned into its present form, that should be understood as a consequence of decisions people made at the time. We can understand the reasons for those decisions and why they didn't choose differently to carry us down alternate paths, but that should not deny their agency, lest we lose sight of our own.
like with the terrorist group isil, you should not give it to freddie de fucking boer.
I'm ignorant - give me the lore drop.
Back in 2017, Freddie lost an argument with Malcolm Harris, lobbed some completely made-up sexual harassment allegations against Harris, and then blamed the whole thing on a bipolar episode. Nowadays he just makes up professors to get mad at.
According to wikipedia he's a eugenics enjoyer. Another W for nominative determinism I guess.
Hey come now, it is a common Dutch last name. Don't slander Dutch people who are called boer (farmer) like that. Do it for the right reasons. We all suck, no matter our last names.
Wasn't the original designation of Boers (as in the Boer war) a denigrating term?
No idea. Prob best to check wikipedia for that. Could be last names, could be occupation, could be some denigrating term.
Looks like it's an endonym, or was at the time. OFC the reason for the Great Trek was that the boers were pissed they couldn't have slaves anymore while under British rule. Charming people all around.
Oof, thanks for the heads up!
Explains his gushing over Scott in the intro.
I still think he makes a lot of good points, in that promptfondlers are losing their shit because people aren't buying the swill they're selling.
In a similar vein, check out this comment on LW.
[on "starting an independent org to research/verify the claims of embryo selection companies"] I see how it "feels" worth doing, but I don't think that intuition survives analysis.
Very few realistic timelines now include the next generation contributing to solving alignment. If we get it wrong, the next generation's capabilities are irrelevant, and if we get it right, they're still probably irrelevant. I feel like these sorts of projects imply not believing in ASI. This is standard for most of the world, but I am puzzled how LessWrong regulars could still coherently hold that view.
https://www.lesswrong.com/posts/hhbibJGt2aQqKJLb7/shortform-1?commentId=25HfwcGxC3Gxy9sHi
So believing in the inevitable coming of the robot god is dogma on LW now. This is a cult.
Also note the standard error that people make (which Rationalists talk about but never seem to internalize): Scott is wise and knowledgeable, until we get to a thing deBoer knows about, and then it suddenly is strange and out of place. But the "wise and knowledgeable" assessment doesn't get revalued, which, considering the complaint about Scott, is a bit of a lack-of-self-awareness moment. Almost like something that looks like it is written in a wise, knowledgeable, and pleasing way doesn't have to be that. Anyway, sorry for the sidetrack - we were talking about genAI and this is not relevant to that.
Gell-Mann amnesia.
Guh wtf
I feel ya, I got got by that Sam Kriss piece dunking on hpmor last week.
Could be worse, I got got by some neofascist scum who was dunking on rationalists.
Never trust a guy with a substack
I'm waiting for the day substack puts RSS access behind a paywall. Unfortunately some decent blogs are still on that platform
I was thinking about that in reaction to something else. Anytime somebody casually brings up chrischan (she has been a decades-long stalking/harassment target), that should be a bit of a red flag.
(E: not blaming people for missing that btw, it just stood out to me).
Ran across a pretty solid sneer: Every Reason Why I Hate AI and You Should Too.
Found a particularly notable paragraph near the end, focusing on the people focusing on "prompt engineering":
In fear of being replaced by the hypothetical ‘AI-accelerated employee’, people are forgoing acquiring essential skills and deep knowledge, instead choosing to focus on “prompt engineering”. It’s somewhat ironic, because if AGI happens there will be no need for ‘prompt-engineers’. And if it doesn’t, the people with only surface level knowledge who cannot perform tasks without the help of AI will be extremely abundant, and thus extremely replaceable.
You want my take, I'd personally go further and say the people who can't perform tasks without AI will wind up borderline-unemployable once this bubble bursts - they're gonna need a highly expensive chatbot to do anything at all, they're gonna be less productive than AI-abstaining workers whilst falsely believing they're more productive, they're gonna be hated by their coworkers for using AI, and they're gonna flounder if forced to come up with a novel/creative idea.
All in all, any promptfondlers still existing after the bubble will likely be fired swiftly and struggle to find new work, as they end up becoming significant drags to any company's bottom line.
Promptfondling really does feel like the dumbest possible middle ground. If you're willing to spend the time and energy learning how to define things with the kind of language and detail that allows a computer to effectively work on them, we already have tools for that: they're called programming languages. Past a certain point, trying to optimize your "natural language" prompts to improve your odds from the LLM gacha is just the digital equivalent of trying to speak a foreign language by repeating yourself louder and slower.