this post was submitted on 07 May 2025
121 points (98.4% liked)
chapotraphouse
you clearly haven't been in a school recently. they absolutely do lol.
Source: my sister teaches science to 10-year-olds in an elementary school and has spent the last 3-4 years arguing with parents about their children's obvious use of ChatGPT for their science fair projects. One kid submitted a classic 'will the same amount of Mentos & Coke erupt higher than a baking soda & vinegar volcano?' project (mind you, this was one of the suggestions my sister gave the class as an 'if you can't think of anything else, you can do something like this'), and the hypothesis/conclusion/etc. he'd printed out onto his poster board was literally shit like "The mixture of Baking Soda (NaHCO₃) + Vinegar (CH₃COOH) turns into CO₂ (gas) + Water + Sodium Acetate" and "Further testing could introduce a motorized volcano using a servo."
What 10-year-old knows what a servo is? What 10-year-old is going to bother looking up the chemical formulas for things like baking soda and typing them out like that? Maybe a very smart one. But my sister would give all of them the benefit of the doubt and let them get all the way to the actual science fair demonstrations, and inevitably, every time, they'd go "um... uh..." when asked any question about their project or presentation that wasn't already answered by the printouts glued to their boards.
And what's sad is that half the parents she's confronted about this over the last few years have responded with "oh, yeah, I told Jimmy to just use ChatGPT to help him with his project. What's the issue?"
I have a few friends who teach middle and high school as well, and they've all reported the same shit over the last 3-4 years: students submitting homework that still has "Okay, here's a better version of what you're asking" prompt responses pasted in the middle of a paragraph, etc.
There is no way this isn't a rampant issue in colleges and universities too, especially given how much graded classwork outside of exams is submitted online.
In med school, most people are AI stans, even though it's often useless because it can't fetch the newest information or doesn't have anything detailed on the topic. People will ask questions about a certain topic, and no matter how much actual explanation has already been given, someone will inevitably send a screenshot of ChatGPT. Sometimes people insist exam questions are wrong because ChatGPT says something different and obviously incorrect. I've seen a guy ignore the teacher in class and ask ChatGPT in real time while the teacher was giving a perfectly clear explanation.
Worst of all, our school has held conferences for med students on "Using AI to boost productivity."