Yeah, LLMs are great at being confidently wrong. It's a little terrifying how good they are at it, especially considering how many people don't realize they're effectively BSing machines. Not that they're intentionally designed to BS; it's more a side effect of the fact that they're built to continue a conversation believably, and they can't possibly get everything right, even if every question had a universally agreed-on answer (which, among humans, is often not the case to begin with).
Similar to a con artist, how shaky they are on facts becomes most apparent when they get into a subject the user knows really well. I've occasionally used an LLM to help jog my memory on something, then cross-referenced it against other sources, but trusting one as your sole source for anything is always a bad idea.