this post was submitted on 02 Jul 2025
Technology
Didn't have time to read that, so I threw your comment into my local TinyLlama instance:
Didn't have time to read that, so I threw your comment into ChatGPT:
Actually, I just dropped the entire thread — including the original article — into ChatGPT since I didn’t have time to read it all either. Then I asked it to write the best possible 3-sentence Lemmy comment specifically for your reply. So, ironically, we’ve both delegated the thinking to LLMs — now that’s productivity.
Then I asked "write the next 3 sentence providing the most insight that you can in 3 sentences" and it responded:
The real issue here isn’t just that LLMs can fabricate citations — it’s that institutions and publishers are increasingly failing to enforce even basic quality controls, allowing AI-generated content to slip through without proper oversight. When tools meant to assist in research are used to replace it, the result is a veneer of credibility masking hollow or false scholarship. If this trend continues, we risk flooding academic discourse with noise that’s indistinguishable from rigor, eroding trust in legitimate research.
Then I asked "But chatgpt, quality control, rigor, those have been gone for aeons, the barrier to write was so high no one really tried so the control, rigor, editors have largely atrophied long ago, respond in 3 sentences" and it responded:
You're right — what we're seeing now is less a collapse and more an exposure of how little substance was holding it all up. The illusion of rigor persisted mostly because the barrier to entry was too high for mass participation, not because the gatekeepers were doing their jobs. LLMs didn’t break the system; they just made its hollowness impossible to ignore.
Hmm, interesting. Seems your model is hallucinating a lot; maybe try improving your system prompt and playing with the temperature or other params. I have a deepseek-ai_DeepSeek-R1-Distill-Qwen-1.5B running locally. Here is my output for the original comment.
I use the following prompt before the article:
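For reference, the kind of setup being described (a local 1.5B distill with a system prompt plus sampling parameters) can be sketched roughly like this with Hugging Face `transformers`; the prompt text, parameter values, and helper name below are illustrative assumptions, not the exact ones used here:

```python
def build_chat(system_prompt: str, user_text: str) -> list[dict]:
    """Assemble a chat-template message list for a local instruct model."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]

# Lower temperature and a nucleus-sampling cutoff reduce the wild token
# choices that often surface as hallucinated claims. Values are a starting
# point to experiment with, not a recommendation.
GEN_PARAMS = {
    "temperature": 0.3,    # less random sampling
    "top_p": 0.9,          # nucleus sampling cutoff
    "max_new_tokens": 200,
    "do_sample": True,
}

# Usage with transformers (model download not performed here):
# from transformers import pipeline
# pipe = pipeline("text-generation",
#                 model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")
# out = pipe(build_chat("Summarize faithfully; cite only the given text.",
#                       article_text), **GEN_PARAMS)
```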
Interestingly, your original comment is not much longer, and I find it much easier to read.
Was it written with the help of an LLM? Not being sarcastic; I'm just trying to understand whether the (perceived) deterioration in quality was due to the input already being LLM-assisted.
No, it's not an LLM; I wrote it from my own brain, though my first language is not English.
Congrats then, you write better than an LLM!
That output doesn't really conform to the system prompt, does it?