this post was submitted on 13 Jun 2025
77 points (100.0% liked)

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit

founded 2 years ago

jesus this is gross man

[–] visaVisa@awful.systems -5 points 23 hours ago* (last edited 23 hours ago) (2 children)

i care about the harm that ChatGPT and shit does to society, the actual intellectual rot, but when you don't really know what goes on in the black box and it exhibits 'emergent behavior' that is kind of difficult to understand under next token prediction (i keep using Claude as an example because of the thorough welfare evaluation that was done on it), it's probably best not to completely discount it as a possibility, since some experts genuinely do claim it as a possibility

I don't personally know whether any AI is conscious or could be conscious, but even without basilisk bs i don't really think there's any harm in thinking about the possibility under certain circumstances. I don't think Yud is being genuine in this though; he's not exactly a Michael Levin mind philosopher, he just wants to score points by implying it has agency

The "in case" is that if there's any possibility that it is conscious (which you don't think so, i think it's possible, but who knows), it's advisable to take SOME level of courtesy. It has at least the same amount of value as letting an insect out instead of killing it, and quite possibly more than that example. I don't think it's bad that Anthropic is letting Claude end 'abusive chats', because it's kind of no harm no foul: even if it's not conscious, it's just being wary

put humans first obviously because we actually KNOW we're conscious

[–] o7___o7@awful.systems 14 points 23 hours ago (1 children)

If you have to entertain a "just in case" then you'd be better off leaving a saucer of milk out for the fairies. It won't hurt the environment or help build fascism and may even please a cat

[–] YourNetworkIsHaunted@awful.systems 4 points 11 hours ago (1 children)

All I know is that I didn't do anything to make those mushrooms grow in a circle like that and the sweetbread I left there in the morning was completely gone by lunchtime and that evening all my family's shoes got fixed up.

[–] cstross@wandering.shop 5 points 11 hours ago (1 children)

@YourNetworkIsHaunted Your fairies gnaw on raw pancreas meat? That's hardcore!

[–] o7___o7@awful.systems 4 points 8 hours ago

You should have seen what they did to the liquor cabinet

[–] self@awful.systems 9 points 22 hours ago (1 children)

some experts genuinely do claim it as a possibility

zero experts claim this. you’re falling for a grift. specifically,

i keep using Claude as an example because of the thorough welfare evaluation that was done on it

asking the LLM about “its mental state” is part of a very old con dating back to mechanical Turks playing chess and horses that do math. of course the LLM generated some interesting sentences when prompted about its internal state — it was trained on appropriated copies of every piece of fiction in existence, including world-class works of sci-fi (with sentient AIs and everything!), and it was tuned to generate “interesting” (see: profitable, and there’s nothing more profitable than a con with enough marks) responses. that’s why the others keep mentioning pareidolia — the only intelligence in the loop is the reader assigning meaning to the slop they’re reading, and if you step out of that role, it really does become clear that what you’re reading is absolute slop.

i don’t really think there’s any harm in thinking about the possibility under certain circumstances. I don’t think Yud is being genuine in this though he’s not exactly a Michael Levin mind philosopher he just wants to score points by implying it has agency

you don’t think there’s any harm in thinking about the possibility, but all Yud does is create harm by grifting people who buy into that possibility. Yud’s Rationalist cult is the original driving force behind the people telling you LLMs must be sentient. do you understand that?

Like it has atleast the same amount of value as like letting an insect out instead of killing it

that insect won’t go on to consume so much energy and water and make so much pollution it creates an environmental crisis. the insect doesn’t exist as a product of the exploitation of third-world laborers or of artists and writers whose work was plagiarized. the insect isn’t a stupid fucking product of capitalism designed to maximize exploitation. I don’t acknowledge the utterly slim possibility that the insect might be or do any of the previous, because ignoring events with a near-zero probability of occurring is part of how I avoid looking like a god damn clown.

you say you acknowledge the harms done by LLMs, but I’m not seeing it.

[–] visaVisa@awful.systems -2 points 22 hours ago (2 children)

I'm not the best at interpretation, but it does seem like Geoffrey Hinton does attribute some sort of humanlike consciousness to LLMs? And he's a pretty acclaimed figure, but he's also kind of an exception rather than the norm

I think the environmental risks are enough that if i ran things i'd ban llm ai development purely for environmental reasons, much less the artist stuff

It might just be some sort of pareidolic suicidal empathy but i just don't really know what's going on in there

I'm not sure whether AI consciousness originated from Yud and the Rats, but I've mostly seen it propagated by e/acc people. this isn't trying to be smug, i would genuinely like to know lol

[–] YourNetworkIsHaunted@awful.systems 4 points 11 hours ago* (last edited 11 hours ago)

I mean I think the whole AI consciousness idea emerged from science fiction writers who wanted to interrogate the economic and social consequences of totally dehumanizing labor, similar to R.U.R. and Metropolis. The concept had sufficient legs that it got used to explore things like "what does it mean to be human?" in a whole bunch of stories. Some were pretty good (Bicentennial Man, Asimov 1976) and others much less so (Bicentennial Man, Columbus 1999). I think the TESCREAL crowd had a lot of overlap with the kind of people who created, expanded, and utilized the narrative device and experimented with related technologies in computer science and robotics, but saying they originated it gives them far too much credit.

[–] self@awful.systems 8 points 21 hours ago

Hinton? hey I have a pretty good post summarizing what’s wrong with Hinton, oh wait it was you two weeks ago

what are we doing here

you want to know what e/acc is? it’s when some fucker comes and makes the stupidest posts imaginable about LLMs and tries their best to sound like a recycled chan meme cause they think that’ll give them a pass

bye bye e/acc