this post was submitted on 21 Jul 2025
699 points (98.6% liked)

Technology

296 readers
318 users here now

Share interesting Technology news and links.

Rules:

  1. No paywalled sites at all.
  2. News articles has to be recent, not older than 2 weeks (14 days).
  3. No videos.
  4. Post only direct links.

To encourage more original sources and keep this space commercial free as much as I could, the following websites are Blacklisted:

More sites will be added to the blacklist as needed.

Encouraged:

founded 2 months ago
MODERATORS
 
you are viewing a single comment's thread
view the rest of the comments
[–] Pandantic@midwest.social 72 points 1 week ago (2 children)
[–] fox2263@lemmy.world 30 points 1 week ago

It’s been trained on Junior Devs posting on stack overflow

[–] jaybone@lemmy.zip 11 points 1 week ago (2 children)

How does an AI panic?

And that’s a quality I look for in a developer. If something goes horribly wrong do you A) immediately contact senior devs and stakeholders, call for a quick meeting to discuss options with area experts? Or B) Panic, go rogue, take hasty ill advised actions on your own during a change freeze without approval or supervision?

[–] WraithGear@lemmy.world 12 points 1 week ago

it doesn’t. it after the fact evaluates the actions, and assumes an intent that would get the highest rated response from the user, based on its training and weights.

now humans do sorta the same thing, but llm’s do not appropriately grasp concepts. if it weighed it diffrent it could just as easily as said that it was mad and did it out of frustration. but the reason it did that was in its training data at some point connected to all the appropriate nodes of his prompt is the knowledge that someone recommended formatting the server. probably as a half joke. again llm’s do not have grasps of context

[–] drosophila@lemmy.blahaj.zone 7 points 1 week ago

Its trained to mimic human text output and humans panic sometimes, there are no other reasons for it.

Actually even that isn't quite right. In the model's training data sometimes there were "delete the database" commands that appeared in a context that vaguely resembled the previous commands in its text window. Then, in its training data when someone was angrily asked why they did something a lot of those instances probably involved "I panicked" as a response.

LLMs cannot give a reason for their actions when they are not capable of reasoning in the first place. Any explanation for a given text output will itself just be a pattern completion. Of course humans do this to some degree too, most blatantly when someone asks you a question while you're distracted and you answer without even remembering what your response was, but we are capable of both pattern completion and logic.