There's evidence they use Cursor, the very popular tool that many devs and large companies use.
LLMs are avoided by many experienced developers and by competent medium and small companies.
Tools like Cursor are sometimes OK for small things, like people learning or generating boilerplate.
But some see it as a warning flag when it shows up in the source code of larger projects.
This comment is meaningless.
What red flags? Why is it a red flag if an experienced developer used Cursor on a larger project? Put it into words.
It's very time-consuming to detect and correct the small mistakes that LLMs make. Beyond one or two lines of code, correcting the multitude of subtle mistakes takes much longer than coding it myself. I use the code completion that comes with my IDE, but that is programmatic completion, not an LLM; it is much, much more accurate and works in smaller chunks that are easy to verify at a glance. I've never known an experienced developer who has had a different experience.

LLMs can be good for getting a general idea of how to code something in a new language or framework I've never touched before, more to help find actual examples than to produce code I'd use directly in the IDE. If I were to use LLM code directly, it would be in a test project, never, ever in production code. I would never write production code in a language I've never used before, with or without an LLM's "help".
When adding code this way, you still have to read it over and fix bugs or things that aren't quite correct; stats show experienced developers are often faster not using this approach, because debugging existing code takes longer than writing it fresh.
The speed is not the issue.
What matters is that subtle bugs are sometimes introduced that take several people to catch, if they're caught at all. These issues may be unique to the LLM.
Large sections of generated code open the door to hard-to-find problems.
Some code is more sensitive to such issues than other code.
The details of how the code was added, and what it does, may make this harmless or very much a problem to be avoided.
That's why it's a flag and not a condemnation.
No way something popular and megacorp-embraced could be bad. Asbestos, lead pipes, 2-digit dates, NFTs, opiates, sub-prime lending, algorithmic content, pervasive surveillance, etc. must have just been flukes.
All technology wields a double-edged sword.
Sure, but with all the mistakes I see LLMs making in places where professionals should be quality-checking their work (lawyers, judges, internal company email summaries, etc.), it gives me pause considering this is a privacy- and security-focused company.
It's one thing for AI to hallucinate cases, and another entirely to forget there's a difference between `=` and `==` when the AI bulk-generates code. One slip-up and my security and privacy could be compromised.

You're welcome to buy into the AI hype. I remember the dot-com bubble.
We've been using 'AI' for quite some time now, well before the advent of AI Rice Cookers. It's really not that new.
I use AI when I master my audio tracks. I am clinically deaf, and there are some frequency ranges I can't hear well enough to master, so I lean heavily on AI there. I also use AI to explain unfamiliar code to me. Now, I don't run and implement such code in a production environment; you have to do your due diligence. If you searched for the same info in a search engine, you'd still have to do your due diligence, since search engine results aren't always authoritative either. It's just that Grok is much faster at searching and, in fact, lists the sources it pulled the info from. Again, much faster than engaging a search engine and slogging through site after site.
If you want to trade accuracy for speed, that's your prerogative.
AI has its uses. Transcribing subtitles, searching images by description, things like that. But too many times I've seen AI summaries that, when you read the article the AI cited, turn out to be flatly wrong.
What's the point of a summary that doesn't actually summarize the facts accurately?
Just because I find an inaccurate search result does not mean DDG is useless. Never trust, always verify.
There it is. The bald-faced lie.
"I don't blindly trust AI, I just ask it to summarize something, read the output, then read the source article too. Just to be sure the AI summarized it properly."
Nobody is doing double the work. If you ask AI a question, it only gets a vibe check at best.
Hey there BluescreenOfDeath, sup. Good to meet you. My name is 'Nobody'.
It's easy to post on a forum and say so.
Maybe you even are actually asking AI questions and researching whether or not it's accurate.
Perhaps you really are the world's most perfect person.
But even if that's true, which I very seriously doubt, then you're going to be the extreme minority. People will ask AI a question, and if they like the answers given, they'll look no further. If they don't like the answers given, they'll ask the AI with different wording until they get the answer they want.
You can't practically "trust but verify" with LLMs. Say I task an LLM with summarizing an article: if I want to check its work, I have to go and read the whole article myself, and that checking takes as much time as just writing the summary myself. It's even worse with code, because you have to deconstruct the AI's code and figure out its internal logic, and by the time you've done that, it would have been easier to just write the code yourself.
It's not that you can't verify the work of AI. It's that if you do, you might as well just create the thing yourself.
As the other guy said, double-edged sword. Asbestos was fucking great, and it's still used for certain things because it's great. The poor interaction with human biology was the other side of the sword.
As an aside, I pulled a fuckload of vinyl asbestos tile out of a house a year ago, and while it wasn't actually all that dangerous because I took proper precautions, it's sorta scary anyway because of that poor-interaction thing.