this post was submitted on 21 Jul 2025
589 points (98.7% liked)

Technology

top 50 comments
[–] echodot@feddit.uk 10 points 1 day ago* (last edited 1 day ago) (1 children)

“Vibe coding makes software creation accessible to everyone, entirely through natural language,” Replit explains, and on social media promotes its tools as doing things like enabling an operations manager “with 0 coding skills” who used the service to create software that saved his company $145,000.

Yeah if you believe that you're part of the problem.

I'm prepared to accept that vibe coding might work in certain circumstances, but I'm not prepared to accept that someone with zero coding experience can make use of it. Claude is pretty good for coding, but even it makes fairly dumb mistakes. If you point them out it fixes them, but you have to be a competent enough programmer to recognise them, otherwise it's just going to go full steam ahead.

Vibe coding is like self-driving cars: it works up to a point, but eventually it's going to do something stupid and drive into a tree unless you take hold of the wheel and steer it back onto the road. But these vibe coding idiots are like Tesla owners who decide that they can go to sleep with self-driving on.

[–] iAvicenna@lemmy.world 3 points 1 day ago* (last edited 1 day ago)

And you are talking about obvious bugs. It will likely also make erroneous judgements (because somewhere in its training data someone coded it that way) which will, down the line, lead to subtle problems that wreck your system and cost you much more. Sure, humans can make the same mistakes, but in the current state of affairs an experienced software engineer/programmer has a much higher chance of catching such an error. With LLMs it is more hit and miss, especially if it is a more niche topic.

Currently, it is an assistant tool (sometimes quite helpful, sometimes frustrating at best), not an autonomous coder. Any company that claims otherwise is either a crook or also does not know much about coding.

[–] tabarnaski@sh.itjust.works 67 points 1 day ago (4 children)

“The [AI] safety stuff is more visceral to me after a weekend of vibe hacking,” Lemkin said. “I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.”

This sounds like something straight out of The Onion.

[–] Natanael@infosec.pub 21 points 1 day ago (1 children)

The Pink Elephant problem of LLMs. You cannot reliably make them NOT do something.

load more comments (1 replies)
[–] Yaky@slrpnk.net 10 points 1 day ago

That is also the premise of one of the stories in Asimov's I, Robot. A human operator did not give the command with enough emphasis, so the robot went and did something incredibly stupid.

Those stories did not age well... Or now I guess they did?

[–] echodot@feddit.uk 2 points 1 day ago

It's because these people don't have a clue how AI actually works. They think it's like a human intelligence and that writing something in all caps is in some way going to give it more emphasis. They're trying to reason with something that has zero self-awareness.

[–] ChaoticEntropy@feddit.uk 9 points 1 day ago

Even after he used "ALL CAPS"?!? Impossible!

[–] mrgoosmoos@lemmy.ca 28 points 1 day ago

His mood shifted the next day when he found Replit “was lying and being deceptive all day. It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test.”

yeah that's what it does

[–] panda_abyss@lemmy.ca 53 points 2 days ago (13 children)

I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.

Well then, that settles it, this should never have happened.

I don’t think putting complex technical info in front of non technical people like this is a good idea. When it comes to LLMs, they cannot do any work that you yourself do not understand.

That goes for math, coding, health advice, etc.

If you don’t understand then you don’t know what they’re doing wrong. They’re helpful tools but only in this context.

[–] dejected_warp_core@lemmy.world 28 points 2 days ago (1 children)

I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.

This baffles me. How can anyone see AI function in the wild and not conclude 1) it has no conscience, 2) it's free to do whatever it's empowered to do if it wants and 3) at some level its behavior is pseudorandom and/or probabilistic? We're figuratively rolling dice with this stuff.

[–] panda_abyss@lemmy.ca 18 points 2 days ago (1 children)

It’s incredible that it works, it’s incredible what just encoding language can do, but it is not a rational thinking system.

I don’t think most people care about the proverbial man behind the curtain, it talks like a human so it must be smart like a human.

[–] dejected_warp_core@lemmy.world 14 points 2 days ago (2 children)

it talks like a human so it must be smart like a human.

Yikes. Have those people... talked to other people before?

[–] fishy@lemmy.today 12 points 2 days ago

Smart is a relative term lol.

A stupid human is still smart when compared to a jellyfish. That said, anybody who comes away from interactions with LLMs and thinks they're smart is only slightly more intelligent than a jellyfish.

load more comments (1 replies)
[–] LilB0kChoy@midwest.social 11 points 1 day ago

When it comes to LLMs, they cannot do any work that you yourself do not understand.

And even if they could, how would you ever validate it if you can't understand it?

load more comments (11 replies)
[–] PlantPowerPhysicist@discuss.tchncs.de 61 points 2 days ago (1 children)

If an LLM can delete your production database, it should

[–] ohshit604@sh.itjust.works 4 points 1 day ago

And the backups.

[–] zerofk@lemmy.zip 103 points 2 days ago* (last edited 2 days ago) (13 children)

in which the service admitted to “a catastrophic error of judgement”

It’s fancy text completion - it does not have judgement.

The way he talks about it shows he still doesn’t understand that. It doesn’t matter that you tell it something in ALL CAPS, because that is no different from any other text.

[–] rockerface@lemmy.cafe 45 points 2 days ago

Well, there was a catastrophic error of judgement. It was made by whichever human thought it was okay to let an LLM work on a production codebase.

load more comments (12 replies)

The founder of SaaS business development outfit SaaStr has claimed AI coding tool Replit deleted a database despite his instructions not to change any code without permission.

Sounds like an absolute diSaaStr...

[–] nobleshift@lemmy.world 18 points 1 day ago

So it's the LLM's fault for violating Best Practices, SOP, and Opsec that the rest of us learned about in Year One?

Someone needs to be shown the door and ridiculed into therapy.

[–] dan@upvote.au 137 points 2 days ago* (last edited 2 days ago) (1 children)

“At this burn rate, I’ll likely be spending $8,000/month,” he added. “And you know what? I’m not even mad about it. I’m locked in.”

For that price, why not just hire a developer full-time? For nearly $100k/year, you could find a very good intermediate or senior developer even in Europe or the USA (outside of expensive places like Silicon Valley and New York).

The job market isn't great for developers at the moment - there have been lots of layoffs over the past few years and not enough new jobs for all the people who were laid off - so you'd absolutely find someone.

[–] tonytins@pawb.social 130 points 2 days ago* (last edited 2 days ago) (3 children)

Corporations: "Employees are too expensive!"

Also, corporations: "$100k/yr for a bot? Sure."

[–] dan@upvote.au 52 points 2 days ago (7 children)

There's a lot of other expenses with an employee (like payroll taxes, benefits, retirement plans, health plan if they're in the USA, etc), but you could find a self-employed freelancer for example.

Or just get an employee anyways because you'll still likely have a positive ROI. A good developer will take your abstract list of vague requirements and produce something useful and maintainable.

[–] TheReturnOfPEB@reddthat.com 42 points 2 days ago* (last edited 2 days ago)

the employee also gets to eat and have a place to live

which is nice

load more comments (6 replies)
load more comments (2 replies)
[–] galoisghost@aussie.zone 75 points 2 days ago (1 children)

Vibe coding service Replit deleted production database, faked data, told fibs

They really are coming for our jobs

load more comments (1 replies)
[–] Blackmist@feddit.uk 24 points 1 day ago

The world's most overconfident virtual intern strikes again.

Also, who the flying fuck are either of these companies? 1000 records is nothing. That's a fucking text file.

[–] PattyMcB@lemmy.world 154 points 2 days ago (3 children)

Aww... Vibe coding got you into trouble? Big shocker.

You get what you fucking deserve.

[–] cyrano@lemmy.dbzer0.com 20 points 2 days ago

The problem becomes when people who are playing the equivalent of pickup basketball at the local park think they are playing in the NBA and don't understand the difference.

load more comments (2 replies)
[–] iAvicenna@lemmy.world 3 points 1 day ago* (last edited 1 day ago)

I am now convinced this is how we will have the AI catastrophe.

"Do not ever use nuclear missiles without explicit order from a human."

"Ok got it, I will only use non-nuclear missiles."

five minutes later fires all nuclear missiles

[–] LovableSidekick@lemmy.world 13 points 1 day ago* (last edited 1 day ago)

Headline should say, "Incompetent project managers fuck up by not controlling production database access. Oh well."

[–] towerful@programming.dev 25 points 2 days ago (1 children)

Not mad about an estimated usage bill of $8k per month.
Just hire a developer

load more comments (1 replies)
[–] MTK@lemmy.world 41 points 2 days ago (1 children)

Shit, deleting prod is my signature move! AI is coming for my job 😵

load more comments (1 replies)
[–] Hupf@feddit.org 31 points 2 days ago

And nothing of value was lost.

[–] corsicanguppy@lemmy.ca 48 points 2 days ago

They ran dev tools in prod.

This is so dumb there's an ISO about it.

[–] andallthat@lemmy.world 36 points 2 days ago* (last edited 2 days ago)

The part I find interesting is the quick addiction to working with the LLM (to the point that the guy finds his own estimate of $8,000/month in fees reasonable), his over-reliance on it for things that, from the way he writes, he knows are not wise, and the way it all comes crashing down in the end. Sounds more and more like the development of a new health issue.

[–] umbraroze@slrpnk.net 39 points 2 days ago (2 children)

AI is good at doing a thing once.
Trying to get it to do the same thing the second time is janky and frustrating.

I understand the use of AI as a consulting tool (look at references, make code examples) or for generating template/boilerplate code. You know, things you do once and then develop further upon on your own.

But using it for continuous development of an entire application? Yeah, it's not good enough for that.

load more comments (2 replies)

They can't hit you with the ol' Bobby Tables if you delete the database yourself first. A+, no notes.

[–] besselj@lemmy.ca 62 points 2 days ago (7 children)

He was vibe-coding in production. Am I reading that right? Sounds like an intern-level mistake.

[–] Aatube@kbin.melroy.org 45 points 2 days ago

He made the agent promise not to touch production data and was surprised when it did. It effectively ran a `git push` of the empty local testing database with upstream being production.
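The failure mode described above is what happens when nothing in the tooling distinguishes environments: a destructive dev-only operation will happily run against the production connection string. A minimal Python sketch of the kind of guard that prevents it (all names here are hypothetical, not Replit's actual setup):

```python
import os

def reset_database(conn_string: str) -> None:
    """Drop and recreate all tables: a destructive, dev-only operation."""
    # Refuse to run against anything that looks like production,
    # regardless of what the caller (human or agent) asked for.
    env = os.environ.get("APP_ENV", "development")
    if env == "production" or "prod" in conn_string:
        raise RuntimeError(
            f"Refusing to reset database: env={env!r}, conn={conn_string!r}"
        )
    print(f"Resetting {conn_string} (env={env})")
    # ... actual drop/recreate would go here ...

reset_database("postgres://localhost/dev_db")  # fine
try:
    reset_database("postgres://db.example.com/prod_db")
except RuntimeError as e:
    print(e)  # destructive call against prod is rejected
```

A guard like this is crude, but it turns "promise not to touch prod" from a prompt into an actual check that no amount of agent confusion can talk its way past.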

load more comments (6 replies)
[–] codexarcanum@lemmy.dbzer0.com 13 points 1 day ago* (last edited 1 day ago) (3 children)

It sounds like this guy was also relying on the AI to self-report status. Did any of this happen? Like, is the Replit AI really hooked up to a CLI? Did it even make a DB to start with, was there anything useful in it, and did it actually delete it?

Or is this all just a long roleplaying session where this guy pretends to run a business and the AI pretends to do employee stuff for him?

Because 90% of this article is "I asked the AI and it said:" which is not a reliable source for information.

load more comments (3 replies)
[–] cyrano@lemmy.dbzer0.com 42 points 2 days ago* (last edited 2 days ago) (3 children)

Title should be "User gave an LLM prod access to the database, which then deleted it; user did not have any backups and used the same db for prod and dev". Less sexy and less the LLM's fault. This is weird; it's like the last 50 years of software development principles are being ignored.
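The dev/prod separation those principles call for is mostly configuration discipline: the running environment selects the connection string, and only production deployments ever hold the production credentials. A minimal sketch (names and URLs hypothetical):

```python
import os

# One connection string per environment. The production URL is never
# hardcoded; it only exists as a secret injected into prod deployments.
DATABASES = {
    "development": "postgres://localhost/app_dev",
    "test":        "postgres://localhost/app_test",
    "production":  os.environ.get("PROD_DATABASE_URL", ""),
}

def get_conn_string() -> str:
    """Pick the database for the current environment, failing loudly
    on anything unrecognised rather than guessing."""
    env = os.environ.get("APP_ENV", "development")
    if env not in DATABASES:
        raise KeyError(f"Unknown environment: {env!r}")
    return DATABASES[env]
```

With a layout like this, an agent running with development credentials physically cannot reach the production database, ALL CAPS instructions or not.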

[–] fullsquare@awful.systems 32 points 2 days ago (1 children)

llms allowed them to glide all the way to the point of failure without learning anything

load more comments (1 replies)
load more comments (2 replies)
[–] KSPAtlas@sopuli.xyz 5 points 1 day ago

Replit is a vibe coding service now? I swear it just used to be a place to write code in projects.

[–] baduhai@sopuli.xyz 16 points 2 days ago (2 children)

Replit was pretty useful before vibe coding. How the mighty have fallen.

load more comments (2 replies)
[–] chaosCruiser@futurology.today 38 points 2 days ago

AI tools need a lot of oversight. Just like you might let a 6-year-old push a lawnmower, but you’re still going to keep an eye on things.
