I don’t really know how to explain it to people who don’t get that ChatGPT is the same thing as when your phone guesses your next word, just on turbo mode. If they still don’t get it after that, I don’t know how to explain it further. Just fucking get it man!!
I try to tell them that the machine is just calculating which word is most likely to follow the previous one. It doesn't understand the context of what it's saying, only that these words fit together right.
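To make the "phone keyboard on turbo" point concrete, here's a toy next-word predictor, a bigram Markov chain in Python. The corpus and names are invented for illustration; a real LLM learns vastly richer statistics over long contexts, but the basic move, pick a statistically plausible next token, is the same.

```python
import random
from collections import defaultdict, Counter

# Count which word follows which in a tiny made-up corpus,
# then sample the next word from those counts.
corpus = "the cat sat on the mat the cat ate the food".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    counts = follows[prev]
    if not counts:                      # dead end: word never seen mid-sentence
        return random.choice(corpus)
    return random.choices(list(counts), weights=list(counts.values()))[0]

word = "the"
out = [word]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))   # e.g. "the cat sat on the mat the cat ate"
```

Scale that table up from one sentence to a scrape of most of the internet, and from word pairs to long contexts, and you get the turbo mode in question. At no point does a step called "understanding" get added.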
At risk of sounding ignorant...
There has to be more to it than that, right? I mean these tools can write working code in whatever language I need, using the libraries I specify, and it just spits out this code in seconds. The code is 90% of the way there.
LLMs can also read charts and correctly assess what's going on, can create stock trading strategies using recent data, can create recipes that work (implying some level of understanding of how to cook), etc. It's kinda scary how much these things can do. Now that my job is training these models, I see how far they've come in coding alone, and they will 100% replace a LOT of developers.
We've had opposite experiences training these things.
I've been shocked by how little they've advanced and how absolutely shit they are. I'm training them in math and it's fucking soul-sucking misery. They're less capable than Wolfram Alpha was 20 years ago. The mistakes they make are so fucking bad, holy shit. I had one the other day try to use Heron's Formula for the area of a triangle on a problem where there were no triangles!
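For reference, since the complaint turns on it: Heron's Formula computes a triangle's area purely from its three side lengths, so reaching for it on a problem with no triangles is a category error, not a small slip.

$$ s = \frac{a+b+c}{2}, \qquad A = \sqrt{s(s-a)(s-b)(s-c)} $$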
These things are crap and they aren't getting better.
Because the LLMs have been trained on however many curated data sets with mostly correct info. It sees how many times a phrase has been used in relation to other phrases, calculates the probability that this is the correct output, then gambles based on a certain preprogrammed risk tolerance and spits out the output. Of course the software engineers will polish it up with barriers to keep it within certain boundaries.
But the key thing is that the LLM doesn't understand the fundamental concepts of what you're asking it.
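A rough sketch of that "gambles on a risk tolerance" step, commonly called temperature sampling. The scores and vocabulary below are invented for illustration; real systems do this over tens of thousands of tokens per step.

```python
import math
import random

def sample_with_temperature(logits, temperature=0.8):
    """Turn raw model scores into probabilities, then gamble.

    Low temperature -> almost always pick the top-scoring token.
    High temperature -> flatter odds, riskier/weirder output.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

# Invented example: scores for four candidate words after "the cat".
vocab = ["sat", "ate", "quantum", "purred"]
logits = [3.1, 2.4, 0.2, 2.8]
print(vocab[sample_with_temperature(logits, temperature=0.8)])
```

Turn the temperature toward 0 and it nearly always picks "sat"; crank it up and "quantum" starts showing up, which is one flavor of the nonsense people call hallucination.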
I'm not a programmer, so I could be misunderstanding the overall process, but from what I've seen of how LLMs work and are trained, AI makes a very good attempt at what you almost wanted. I don't know how quickly AI will progress, but for now I just see it as an extremely expensive party trick.
That party trick is shipping code and is good enough to replace thousands of developers at Microsoft and other companies. Maybe that says something about how common production programming problems are. A lot of business code boils down to putting things in and pulling things out of databases or moving data around via API calls and other communication methods. This tool handles that kind of work with ease.
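As a sketch of the kind of plumbing code meant here, using Python's standard-library sqlite3 (the orders table and field names are made up): a huge share of line-of-business code has exactly this shape, which is part of why pattern-matching tools handle it.

```python
import sqlite3

# The bread and butter of business code: take data in, put it in a
# database, pull it back out. Table/column names here are invented.
conn = sqlite3.connect("orders.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)"
)
conn.commit()

def add_order(customer, total):
    conn.execute("INSERT INTO orders (customer, total) VALUES (?, ?)", (customer, total))
    conn.commit()

def orders_for(customer):
    cur = conn.execute("SELECT id, total FROM orders WHERE customer = ?", (customer,))
    return cur.fetchall()

add_order("acme", 19.99)
print(orders_for("acme"))   # e.g. [(1, 19.99)]
```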
Being able to emulate patterns does not actually indicate some sort of higher level of understanding. You aren't going to get innovative new recipes; they are either just paraphrasing what they have read many people describe, or they are cobbling together words.
That may have been a bad example because for recipes it could just search the web and infer that vegetables go with olive oil for a stir fry. Where it's impressed me so far is in taking a piece of complex code and being able to refactor it, add features, write unit tests, and write up development plans. That text doesn't exist. It has to do some form of reasoning to interpret the code and come up with solutions for that particular problem.
Syntax is syntax. I think from the standpoint of making a computer do something, it's really not that different from language processing. That, and just like when you ask it to make a new recipe or whatever else, it is liable to make up something nonsensical and fail to identify the problem unless you spell it out first.
No, there really isn't. You're just piggybacking off of the exploited labor of working-class engineers and enjoying the luxury of living away from the blood-soaked externalities that make your chatbot sing.
If AI actually did what you think it does, then why would the capitalist class support it? A computer program with the might of millions of workers? How would the control of the capitalist class continue to exist?
Or the more reasonable explanation: like smartphones and crypto, there exists a very lucrative profit incentive for the capitalist leech to create profit margins out of thin air. Westerners are trained to overconsume, so this doesn't come as a surprise.
Because server farms are cheaper than hiring developers, artists, writers, etc.? Capitalists don't care about the environmental impacts as long as their bottom line isn't affected.
This technology is killing jobs. Thousands are being laid off at Microsoft this month on top of layoffs at lots of other tech companies. The field I went to college to learn is cooked. There's already thousands of over qualified people applying to the few jobs that are left. This is a way bigger deal than Crypto and another way for the owning class to hoard more wealth for themselves at the expense of us working class folks.
What do you say to people who respond "that's how the human brain works too"? I mean, I know that's BS, but I see that response all the time.
I feel really grateful that I was exposed to Markov chain bots on IRC back in the day, as it was a powerful inoculant.