this post was submitted on 27 Jul 2025
91 points (97.9% liked)

Technology

Perspectivist@feddit.uk · 1 point · 3 days ago (last edited 3 days ago)

This isn’t a failure of the model - it’s a misunderstanding of what the model is. ChatGPT is a tool, not a licensed practitioner. It has one capability: generating language. That sometimes produces correct information as a side effect of the data it was trained on, but there is no understanding, no professional qualification, and no judgment behind it.

If you ask it whether it’s qualified to act as a therapist, it will tell you no. If you instruct it to role-play as one, however, it will do that - because following instructions is exactly what it’s designed to do. Complaining that a language model behaves like a language model, and then demanding more guardrails to stop people from using it badly, is just outsourcing common sense.

There’s also this odd fixation on Sam Altman, as if he were hand-crafting the bot’s behavior in real time. It’s much closer to an open-ended system that reacts to input than a curated service. What you get out of it depends entirely on what you put in.