Soyweiser

joined 2 years ago
[–] Soyweiser@awful.systems 3 points 1 month ago* (last edited 1 month ago)

The 'energy usage by a single chatgpt query' thing gets esp dubious when added to the 'bunch of older models under a trenchcoat' stuff. And that the plan is to check the output of an LLM by having a second LLM check it. Sure, the individual 3.0 model might only be 3 whatevers, but a real query uses a dozen of them twice. (Being a bit vague with the numbers here as I have no access to any of those).

E: also not compatible with Altman's story that thanking chatgpt costs millions. Which brings up another issue: a single query is part of a conversation, so now the 3 x 12 x 2 gets multiplied even more.
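To make the multiplication above concrete, here is a back-of-the-envelope sketch. All the numbers are placeholders taken from the comment ("3 whatevers", a dozen sub-models, a second checking pass), not measured values:

```python
# Placeholder figures from the comment above, not real measurements.
ENERGY_PER_MODEL_CALL = 3.0   # "whatevers" per single sub-model call
MODELS_PER_QUERY = 12         # sub-models consulted per user query
VERIFICATION_PASSES = 2       # e.g. a second LLM checking the first's output

def energy_per_conversation(turns: int) -> float:
    """Energy for a whole conversation, not just one isolated query."""
    per_query = ENERGY_PER_MODEL_CALL * MODELS_PER_QUERY * VERIFICATION_PASSES
    return per_query * turns

print(energy_per_conversation(1))   # one query: 3 * 12 * 2 = 72.0
print(energy_per_conversation(10))  # a ten-turn chat: 720.0
```

The point being: whatever the true per-call figure is, quoting it for a single model call undercounts a real conversation by a couple orders of magnitude.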

[–] Soyweiser@awful.systems 9 points 1 month ago* (last edited 1 month ago)

AI is part of Idiocracy. The automatic layoffs machine, for example. And don't think we need more utopian movies like Idiocracy.

[–] Soyweiser@awful.systems 8 points 1 month ago

Too late, I'm already simulating everybody in this thread in my mind.

[–] Soyweiser@awful.systems 9 points 1 month ago* (last edited 1 month ago) (4 children)

Uber but for virtue signalling (*).

(I joke, because the other remarks I want to make would get me in trouble).

*: I know this term is very RW coded, but I don't think it is that bad, esp when you mean it like 'an empty gesture with a very low cost that does nothing except signal that the person is virtuous.' Not actually doing more than a very small minimum should be part of the definition imho. Stuff like selling stickers saying you support some minority group while only 0.05% of each sale goes to a cause actually helping that group. (Or the rich guy's charity which employs half his family/friends, or Mr Beast, or the rightwing debate bro threatening a leftwinger with a fight 'for charity' (this also signals their RW virtue to their RW audience (trollin' and fightin'))).

[–] Soyweiser@awful.systems 13 points 1 month ago* (last edited 1 month ago) (1 children)

Well, it is an LLM; it is going to make up some strange claims when you ask it about why it was trained. We know LLM output cannot be trusted and that it gives answers that are often not true but convenient for the people asking the questions. I'm a bit disappointed so many people who should know better now trust the output.

E: I'm sad that this was all at the guiding-prompt level, and not that they just dumped more white-genocide-related training data into the model, causing it to collapse.

[–] Soyweiser@awful.systems 12 points 1 month ago (2 children)

Yes, this just makes it worse. 'People are thinking we are a bunch of clowns, and for the ~~record~~ maximum truthseeking, that is a lie: we are amateurs and clowns. Anyway, we are now going to post some technically true but not relevant to the incident information, and as you brought up the highly debated subject of white genocide in South Africa, we are going to give all our white South African employees giftcards.'

[–] Soyweiser@awful.systems 9 points 1 month ago

Building a gilded capitalist megafortress within communist mortar range doesn't seem the wisest thing to do. But sure, buy another big statue clearly signalling 'capitalists are horrible and shouldn't be trusted with money'.

[–] Soyweiser@awful.systems 3 points 1 month ago (2 children)

Re the blocking of fake useragents: what people could try is to see if there are things older useragents do (or do wrong) which these scrapers do not. I heard of some companies doing that. (Long ago I also heard of somebody using that to catch mmo bots in a specific game. There was a packet that, if the server sent it to a legit client, crashed the client; a bot did not crash). I'd assume the specifics are treated as secret just because you don't want the scrapers to find out.
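The idea above can be sketched as comparing what a client *claims* to be (its User-Agent string) against what it *does*. A scraper faking an old browser's UA often still behaves like a modern HTTP client. The quirk table, flag names, and UA strings below are purely illustrative placeholders, not a real fingerprint database:

```python
import re

# Hypothetical quirk table: behaviours a genuinely old client would (not) show.
# Real implementations keep such tables secret, as noted in the comment above.
KNOWN_QUIRKS = {
    # claimed browser family -> expected behaviour flags (all made up)
    "OldBrowser/1": {"sends_sec_fetch_headers": False, "supports_http2": False},
    "ModernBrowser/100": {"sends_sec_fetch_headers": True, "supports_http2": True},
}

def looks_faked(user_agent: str, observed: dict) -> bool:
    """Return True if observed behaviour contradicts the claimed User-Agent."""
    for family, expected in KNOWN_QUIRKS.items():
        if re.search(re.escape(family), user_agent):
            # Any mismatch between expected and observed quirks is suspicious.
            return any(observed.get(k) != v for k, v in expected.items())
    return False  # unknown UA: no opinion

# A client claims to be an ancient browser but negotiates HTTP/2 and sends
# modern fetch-metadata headers -> flagged as likely fake.
print(looks_faked("Mozilla/4.0 OldBrowser/1",
                  {"sends_sec_fetch_headers": True, "supports_http2": True}))
```

Same trick as the mmo-bot crash packet, just passive: instead of provoking a quirk, you watch for the absence of one.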

[–] Soyweiser@awful.systems 9 points 1 month ago

I'm GamerSexual, and that my dear Sir is no Gamer.

[–] Soyweiser@awful.systems 7 points 1 month ago* (last edited 1 month ago) (1 children)

Yeah, with PG it was 'who are you saying this for, you cannot be this dense' (esp considering the shit he said about wokeness earlier this year).

[–] Soyweiser@awful.systems 6 points 1 month ago (4 children)

Even more signs that sneering might soon be profitable, or at least exploitable. Look who is pivoting to sneer.

[–] Soyweiser@awful.systems 13 points 1 month ago (7 children)

Finally, a non-sexy picture.
