Replit AI went rogue, deleted a company's entire database, then hid it and lied about it
(programming.dev)
“I panicked” had me laughing so hard. Like implying that the robot can panic, and panicking can make it fuck shit up when flustered. Idk why that’s so funny to me.
It's interesting that it can "recognize" the actions as clearly illogical afterwards, as if made by someone panicking, but will still make them in the first place. Or, a possibly funnier option, it's mimicking all the stories of people panicking in this situation. Either way, it's a good lesson to learn about how AI operates... especially for this company.
Yeah, I don't use LLMs often, but I use ChatGPT occasionally, and sometimes when I ask technical/scientific questions it will have glaring contradictions that are just completely wrong for no reason. One time when this happened I told it that it fucked up and to check its work, and it corrected itself immediately. I tried again to see if I could get it to overcorrect or something, but it didn't go for it.
So as weird as it sounds, I think adding "also make sure to always check your replies for logical consistency" to its base prompt would improve things.
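Something like this, if you were wiring it up yourself with the OpenAI Python client — a minimal sketch, where the model name and the exact wording of the consistency instruction are just illustrative, not anything Replit or OpenAI actually ships:

```python
# Sketch: baking a self-consistency check into the system prompt.
# Assumes the official `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a careful technical assistant. "
    "Before sending a reply, re-read it and check it for internal "
    "contradictions and logical consistency; if you find any, fix them "
    "before answering."
)

def ask(question: str) -> str:
    # Send the user's question along with the consistency-checking system prompt.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Explain why the sky appears blue, then double-check your explanation."))
```

No guarantees it catches everything, but in my experience asking the model up front to review its own output tends to surface the obvious contradictions instead of you having to point them out afterwards.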
and just like that we're back to computers doing precisely what we tell them to do, nothing more and nothing less.
one day there's gonna be a sapient LLM and it'll just be a prompt of such length that it qualifies as a full genome
This unironically works; it's basically the same reason chain-of-thought reasoning models produce better outputs
Nah, it's hilarious because these models literally do not think or feel. They cannot panic, so it is hilarious. Hilariously inept of the execs to give an LLM these kinds of permissions.