this post was submitted on 10 Aug 2025
98 points (99.0% liked)

AI - Artificial intelligence

[–] pulsewidth@lemmy.world 8 points 1 day ago (1 children)

And there it is, the "he's using ChatGPT wrong" defence, as discussed in the post.

[–] otacon239@lemmy.world -1 points 1 day ago (2 children)

While I do admit OpenAI (and all the others) are overselling this like crazy, being strictly anti-AI just isn’t going to do me any favors.

When I need a language model, I use a language model. I’m not going to ask my English teacher a math question. There are times when it’s useful and times when it’s not, like any other tool.

Every technology that’s come before it has this double wave of haters and lovers pulling toward the extremes of embrace or disdain. I’m just taking the middle road and ignoring the hype either way.

I’m just grabbing the hammer when I need it.

[–] tyler@programming.dev 6 points 1 day ago (1 children)

But the problem is that you understand that, and the majority of people do not. They're not told it's an LLM, and even if they are, they don't understand what that means. They call it AI. They think it's smart. So the only way to get them to stop using it for every goddamn thing is by showing how problematic it is.

[–] otacon239@lemmy.world 3 points 1 day ago* (last edited 1 day ago)

I tell almost everyone I meet how dangerous it is. It’s one of my biggest talking points with friends. I am very well aware of how much it is a problem. But as someone who understands it, I personally feel comfortable using it.

Notice my choice of language throughout. I’m not applying this logic to the general public.

[–] Perspectivist@feddit.uk 5 points 1 day ago

You’re right - this is a mix of big tech hatred combined with false expectations about what “AI” means and what an LLM should be able to do.

I think one big “issue” with current LLMs is that they’re too good. The first time I saw text from one, it was so bad it was hilarious - perfectly polished sentences that were total nonsense. It was obvious this thing was just talking, and all of it was gibberish.

Now the answers make sense, and they’re often factually correct as well, so I can’t really blame people for forgetting that under the hood it’s still the same gibberish machine. They start applying standards to it as if it were an early AGI, and when it fails a basic task like this, it feels like they’ve been lied to (which, to be fair, they have) - when in reality, they’re just trying to drive screws with a hammer.