Found a banger in the comments:
BlueMonday1984
Hey, remember the thing that you said would happen?
The part about condemnation and mockery? Yeah, I already thought that was guaranteed, but I didn't expect to be vindicated so soon afterwards.
EDIT: One of the replies gives an example for my "death of value-neutral AI" prediction too, openly calling AI "a weapon of mass destruction" and calling for its abolition.
Managed to stumble across two separate attempts to protect promptfondlers' feelings from getting hurt like they deserve, titled "Shame in the machine: affective accountability and the ethics of AI" and "AI Could Have Written This: Birth of a Classist Slur in Knowledge Work".
I found both of them whilst trawling Bluesky, and they're being universally mocked like they deserve on there.
I don't keep track, I just put these together when I've got an interesting tangent to go on.
Discovered some commentary from Baldur Bjarnason about this:
Somebody linked to the discussion about this on hacker news (boo hiss) and the examples that are cropping up there are amazing
This highlights another issue with generative models that some people have been trying to draw attention to for a while: as bad as they are in English, they are much more error-prone in other languages
(Also IMO Google translate declined substantially when they integrated more LLM-based tech)
On a personal sidenote, I can see non-English text/audio becoming a form of low-background media in and of itself, for two main reasons:
-
First, LLMs' poor performance in languages other than English will make non-English AI slop easier to identify - and, by extension, easier to avoid
-
Second, non-English datasets will (likely) contain less AI slop in general than English datasets - between English being widely used across the world, the tech corps behind this bubble being largely American, and LLM userbases being largely English-speaking, chances are AI slop will be primarily generated in English, with non-English AI slop being a relative rarity.
By extension, knowing a second language will become more valuable as well, as it would allow you to access (and translate) low-background sources that your English-only counterparts cannot.
New science-related development - The NIH Is Capping Research Proposals Because It's Overwhelmed by AI Submissions
Starting this off with a fittingly rage-inducing Twitter thread about an artist getting fucked over by AI
Found a good security-related sneer in response to a low-skill exploit in Google Gemini (tl;dr: "send Gemini a prompt in white-on-white/0px text"):
I've got time, so I'll fire off a sidenote:
In the immediate term, this bubble's gonna be a goldmine of exploits - chatbots/LLMs are practically impossible to secure in any real way, and will likely be the most vulnerable part of any cybersecurity system under most circumstances. A human can resist being socially engineered, but these chatbots can't really resist being jailbroken.
In the longer term, the one-two punch of vibe-coded programs proliferating in the wild (featuring easy-to-find and easy-to-exploit vulnerabilities) and the large scale brain drain/loss of expertise in the tech industry (from juniors failing to gain experience thanks to using LLMs and seniors getting laid off/retiring) will likely set back cybersecurity significantly, making crackers and cybercriminals' jobs a lot easier for at least a few years.
Found a neat tangent whilst going through that thread:
The single most common disciplinary offense on scpwiki for the past year+ has been people posting AI-generated articles, and it is EXTREMELY rare for any of those cases to involve a work that had been positively received
On a personal note, I expect the Foundation to become a reliable source of post-'22 human-made work for the same reasons I stated Newgrounds would recently:
-
An explicit ban on AI slop, which deters AI bros and allow staff to nuke it on sight
-
A complete lack of an ad system, which prevents content farms from setting up shop
-
Dedicated quality control systems (deletion and rewrite policies, in this case) which prevent slop from gaining a foothold and drowning out human-made work
Caught a particularly spectacular AI fuckup in the wild:
(Sidenote: Rest in peace Ozzy - after the long and wild life you had, you've earned it)