this post was submitted on 21 Jul 2025
699 points (98.6% liked)

Technology

[–] notabot@piefed.social 84 points 6 days ago (5 children)

Assuming this is actually real, because I want to believe no one is stupid enough to give an LLM access to a production system, the outcome is embarrassing, but surely they can just roll back the changes to the last backup, or to the checkpoint before this operation. Then I remember that the sort of people who let an LLM loose on their system probably haven't thought about things like disaster recovery planning, access controls or backups.

[–] AnUnusualRelic@lemmy.world 62 points 6 days ago (2 children)

"Hey LLM, make sure you take care of the backups "

"Sure thing boss"

[–] notabot@piefed.social 42 points 6 days ago

LLM seeks a match for the phrase "take care of" and lands on a mafia connection. The backups now "sleep with the fishes".

[–] pulsewidth@lemmy.world 22 points 6 days ago (1 children)

Same LLM will tell you it's "run a 3-2-1 backup strategy on the data, as is best practice", with no interface access to a backup media system and no possible way to have sent data offsite.

[–] Swedneck@discuss.tchncs.de 15 points 6 days ago

there have to be multiple people by now who think they've been running a business because the AI told them it was taking care of everything, while absolutely nothing was happening

[–] pulsewidth@lemmy.world 28 points 6 days ago (1 children)

I think you're right. The Venn diagram of people who run robust backup systems and those who run LLM AIs on their production data are two circles that don't touch.

[–] Asswardbackaddict@lemmy.world 2 points 6 days ago (2 children)

Working on a software project. Can you describe a robust backup system? I have my notes and code and other files backed up.

[–] pulsewidth@lemmy.world 3 points 5 days ago

Sure, but it's a bit of an open-ended question because it depends on your requirements (and your clients' potentially), and your risk comfort level. Sorry in advance, huge reply.

Backing up a production environment is different from just backing up personal data: you have to consider stateful backups of the data across the whole environment, to ensure, for instance, that an app's config is aware of changes made recently in the database, or else you may restore inconsistent data that will have issues/errors. For a small project that runs on a single server you can do a nightly backup that runs a pre-backup script to gracefully stop all of your key services, then performs the backup, then starts them again with a post-backup script. Large environments with multiple servers (or containers/etc.) or sites get much more complex.
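The single-server nightly backup described above can be sketched roughly like this. It's only a sketch: the service name (`myapp`) and the service-manager command are hypothetical placeholders, and in real use you'd pass `systemctl` (or your init system's equivalent) and your actual services and paths.

```shell
#!/bin/sh
# Sketch of a nightly backup with pre/post-backup quiescing, for a
# single-server setup. "myapp" and all paths are placeholder names.

backup_with_quiesce() {
    # $1 = directory to archive, $2 = backup destination directory,
    # $3 = service-manager command ("systemctl" in real use)
    src="$1"; dest="$2"; svc="$3"

    # Pre-backup: stop the service so files are in a consistent state
    "$svc" stop myapp || return 1

    mkdir -p "$dest"
    tar -czf "$dest/backup-$(date +%Y-%m-%d).tar.gz" \
        -C "$(dirname "$src")" "$(basename "$src")"
    rc=$?

    # Post-backup: bring the service back up regardless of what comes next
    "$svc" start myapp || return 1
    return $rc
}
```

In practice you'd schedule this from cron, and a real version would also dump the database (e.g. `pg_dump`) into the source directory before archiving, so the DB state travels with the config.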

Keeping with the single-server example: those backups can be stored on a local NAS, synced to another location on a schedule (not set to overwrite, but to keep multiple copies), and ideally you would take a periodic (e.g. weekly, whatever you're comfortable with) copy off to a non-networked device like a USB drive or tape, which would also be offsite (e.g. carried home, or stored in a drawer in the case of a home office). This is loosely the 3-2-1 strategy: keep at least 3 copies of important data on 2 different mediums ('devices' is often used today) with 1 offsite. It protects you from a local physical disaster (e.g. fire/burglary) and a network disaster (e.g. virus/crypto/accidental deletion), and it builds in enough redundancy that more than one thing has to go wrong to cause you serious data loss.
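The "sync to another location, keeping multiple copies rather than overwriting" step can be sketched like this. Assumptions to note: the function and all paths are hypothetical, it uses plain `cp` for portability (in practice you'd likely use `rsync` with `--backup-dir` over SSH to a NAS), and it keeps one archived copy per day.

```shell
#!/bin/sh
# Sketch of a versioned sync: the latest copy lives in <dest>/current,
# and the previous state is preserved under <dest>/archive/<date>
# instead of being overwritten.

sync_versioned() {
    # $1 = source directory, $2 = destination root
    day=$(date +%Y-%m-%d)

    # Archive the previous "current" copy before replacing it
    if [ -d "$2/current" ]; then
        mkdir -p "$2/archive"
        cp -a "$2/current" "$2/archive/$day"
    fi

    mkdir -p "$2/current"
    cp -a "$1"/. "$2/current"/
}
```

A real setup would also prune old dated archives (e.g. keep the last 30 days) so the second location doesn't fill up.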

Really the best advice I can give is to make a disaster recovery plan (DRP). There are guides online, but essentially you plot out the sequence of steps it would take to restore your environment to up-and-running with current data, in case of a disaster that takes out your production environment or its data.

How long would it take you to spin up new servers (or docker containers or whatever) and configure them to the right IPs, DNS, auth keys and so on? How long to get the most recent copy of your production data back on that newly-built system and running? Those are the types of questions you try to answer with a DRP.

Once you have an idea of what a recovery would look like and how long it would take, it will inform how you want to approach your backups. You might decide that file-based backups of your core config, database files and other unique data are not enough (because the restore process may have you out of business for a week), and that you'd rather do a machine-wide stateful backup of the system that could get you back up and running much quicker (perhaps a day).

Whatever you choose, the most important step (and one that is often overlooked) is to actually do a test recovery once you have your backup plan implemented and your DR plan considered. Take your live environment offline and attempt your recovery plan. It's really not so hard for small environments, and it can surface all sorts of things you missed in the planning stage that need reconsideration. It's much less stressful to find those problems when you know your real environment is just sitting there waiting to be turned back on. But like I said, it's all down to how comfortable you are with risk, and how much of your time you want to spend on backups and DR.

[–] Winthrowe@lemmy.ca 2 points 5 days ago

Look up the 3-2-1 rule for guidance on an “industry standard” level of protection.

[–] AngryPancake@sh.itjust.works 17 points 6 days ago (1 children)

But with ai we don't need to pay software engineers anymore! Think of all the savings!

[–] notabot@piefed.social 10 points 6 days ago (1 children)

Without a production DB we don't need to pay software engineers anymore! It's brilliant, the LLM has managed to reduce the company's outgoings to zero. That's bound to delight the shareholders!

[–] MoonRaven 3 points 6 days ago

Without a production db, we don't need to host it anymore. Think of those savings!

[–] BigDanishGuy@sh.itjust.works 11 points 6 days ago (1 children)

I want to believe noone is stupid enough to give an LLM access to a production system,

Have you met people? They're dumber than a sack of hammers.

people who let an LLM loose on their system probably haven't thought about things like disaster recovery planning, access controls or backups.

Oh, I see, you have met people...

I worked with a security auditor, and the stories he could tell. "Device hardening? Yes, we changed the default password" and "whaddya mean we shouldn't expose our production DB to the internet?"

[–] notabot@piefed.social 11 points 6 days ago

I once had the "pleasure" of having to deal with a hosted mailing list manager for a client. The client was using it sensibly, requiring double opt-in and such, and we'd been asked to integrate it into their backend systems.

I poked the supplier's API and realised there was a glaring DoS flaw in the fundamental design of it. We had a meeting with them where I asked them about fixing that, and their guy memorably said "Security? No one's ever asked about that before...", and then suggested we phone them whenever their system wasn't working and they'd restart it.

[–] burgerpocalyse@lemmy.world 11 points 6 days ago

you best start believing in stupid stories, you're in one!