I once asked ChatGPT for the name of an anime I couldn't remember. I described the whole premise and then some details of the final episode. ChatGPT says "Steins;Gate". I say "no, it's not as famous as that". It then says "Erased" and proceeds to describe it, showing that my description had actually been very effective.
I asked "why did you say Steins;Gate if I described Erased so well?": "you mentioned time travel and Steins;Gate is a popular anime about time travel"
"but I also mentioned a lot of other stuff that doesn't match Steins;Gate at all, like the details about the villain or how the time traveling works"
"yeah my bad, I just went by popularity"
"next time, how can I phrase my questions in a way that would make you consider the whole input instead of just latching onto some key information in it?"
"you could have mentioned details about the villain or how the time traveling works and I would have used that to rule out Steins;Gate"
"but I did"
"yeah sorry about that, next time try giving me details about the villain or how the time traveling works"
Hilarious. Just the other day I had a whole conversation with it about how to send feedback to OpenAI, and it gave me bogus instructions the whole time before it finally said something like "they keep changing everything, just email them".
I've recently been working with a niche tool that has very little documentation on the web, but it's open source and has a ton of discussion on public mailing lists. ChatGPT is sometimes able to figure out which param I need to send for specific stuff in that tool even when there are zero Google matches for the param name, but more often than not it just hallucinates things or mentions features that no longer exist. I've gotten into the habit of always asking things like "is that right?" or "is that answer up to date?" before even reading its first response, and it often replies with things like "no, that param only exists in some other similar tool" or "no, that API has been deprecated" and shit like that.
If it were up to me I wouldn't be using ChatGPT at all, given all the time it has wasted on random stuff it makes up, but whatever training data OpenAI used clearly contains more information about the niche stuff I'm working with than the web does at this point - so sometimes it can still save me time too.
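For what it's worth, that "check before trusting" habit is easy to script. Here's a rough sketch, assuming the openai Python client and its chat.completions.create interface; the model name and the question are placeholders, not the actual tool or params from my case:

```python
# Rough sketch of the "ask, then immediately ask it to double-check" habit.
# Assumes the openai Python client (v1+); model name and question are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
MODEL = "gpt-4o"   # placeholder model name


def ask_with_check(question: str) -> tuple[str, str]:
    """Ask a question, then ask the model whether its own answer is right and up to date."""
    messages = [{"role": "user", "content": question}]
    first = client.chat.completions.create(model=MODEL, messages=messages)
    answer = first.choices[0].message.content

    # Follow up before trusting the first response.
    messages += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Is that right? Is that answer up to date?"},
    ]
    check = client.chat.completions.create(model=MODEL, messages=messages)
    return answer, check.choices[0].message.content


answer, check = ask_with_check("Which param controls <some behaviour> in <the tool>?")
print(answer)
print(check)
```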
I use it exclusively to condense 1000-word text blocks down to 100 words, and it's good enough at that that the time it saves me is worth the (sometimes) slightly lower quality than what I could have done myself.
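In case anyone wants to try the same thing, a minimal sketch of that condensing step, again assuming the openai Python client; the prompt wording and the 100-word target are just illustrative:

```python
# Minimal sketch of condensing a long text block to ~100 words.
# Assumes the openai Python client (v1+); model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def condense(text: str, target_words: int = 100) -> str:
    """Ask the model to compress a long block of text to roughly target_words words."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Condense the following to about {target_words} words, "
                       f"keeping the key points:\n\n{text}",
        }],
    )
    return response.choices[0].message.content


print(condense("<paste the 1000-word block here>"))
```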