this post was submitted on 21 Jul 2025
699 points (98.6% liked)

[–] Cruxifux 46 points 6 days ago (2 children)

“I panicked” had me laughing so hard. Like implying that the robot can panic, and panicking can make it fuck shit up when flustered. Idk why that’s so funny to me.

[–] Feathercrown@lemmy.world 27 points 6 days ago (1 children)

It's interesting that it can "recognize" the actions as clearly illogical afterwards, as if made by someone panicking, but will still make them in the first place. Or, a possibly funnier option, it's mimicking all the stories of people panicking in this situation. Either way, it's a good lesson to learn about how AI operates... especially for this company.

[–] abbotsbury@lemmy.world 10 points 6 days ago (2 children)

> It’s interesting that it can “recognize” the actions as clearly illogical afterwards, as if made by someone panicking, but will still make them in the first place

Yeah I don't use LLMs often, but I use ChatGPT occasionally, and sometimes when I ask technical/scientific questions it will have glaring contradictions that are just completely wrong for no reason. One time when this happened I told it that it fucked up and to check its work, and it corrected itself immediately. I tried again to see if I could get it to overcorrect or something, but it didn't go for it.

So as weird as it sounds, I think adding "also make sure to always check your replies for logical consistency" to its base prompt would improve things.
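
For anyone curious, here's a minimal sketch of what baking that instruction into the system prompt might look like, assuming the OpenAI Python SDK; the model name, prompt wording, and example question are just placeholders, not what any vendor actually ships:

```python
# Rough sketch (placeholder values, not a real deployment): adding a
# self-check instruction to the system prompt with the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a careful assistant. "
    "Also make sure to always check your replies for logical consistency "
    "before sending them."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {
            "role": "user",
            "content": "Does increasing pressure raise or lower the boiling point of water?",
        },
    ],
)
print(response.choices[0].message.content)
```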

[–] Swedneck@discuss.tchncs.de 9 points 6 days ago

and just like that we're back to computers doing precisely what we tell them to do, nothing more and nothing less.

one day there's gonna be a sapient LLM and it'll just be a prompt of such length that it qualifies as a full genome

[–] Feathercrown@lemmy.world 2 points 6 days ago

This unironically works; it's basically the same reason why chain-of-reasoning models produce better outputs

[–] MotoAsh@lemmy.world 2 points 6 days ago

Nah, it's hilarious precisely because these models literally do not think or feel; they cannot panic. Hilariously inept of the execs to give an LLM these kinds of permissions.