
Multiple things have gone wrong with AI for me, but these two pushed me over the brink. This is mainly about LLMs, but other AI hasn't been particularly helpful for me either.

Case 1

I was trying to find the music video that a screenshot was taken from.

I gave o4-mini the image and asked where it was from. It refused, saying it does not discuss private details. Fair enough. I told it the artist was xyz. It then listed three of their popular music videos, none of which was the correct answer to my question.

Then I started a new chat and described in detail what the screenshot was. It once again regurgitated similar things.

I gave up. I did a simple reverse image search and found the answer in 30 seconds.

Case 2

I wanted to create a spreadsheet for tracking investments, with xyz columns.

It did give me the correct columns and rows, but the formulae for the calculations were off. They were almost correct most of the time, but almost correct is useless when working with money.
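To give a sense of what I mean (a simplified illustration, not the exact formula it gave me), the gap between a correct annualized-return calculation and an "almost correct" one is small but real:

```python
# Simplified illustration (not the exact formula the model produced):
# correct annualized return vs. the kind of "almost right" version an LLM can give you.

start_value = 10_000.00   # initial investment
end_value = 13_310.00     # value after 3 years
years = 3

# Correct: compound annual growth rate (CAGR)
cagr = (end_value / start_value) ** (1 / years) - 1

# "Almost correct": total return divided by years, which ignores compounding
almost = ((end_value - start_value) / start_value) / years

print(f"CAGR:           {cagr:.2%}")   # 10.00%
print(f"Almost correct: {almost:.2%}") # 11.03%
```

Both numbers look plausible; only one of them is right.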

I gave up. I manually made the spreadsheet with all the required details.

Why are LLMs so wrong most of the time? Aren't they processing high quality data from multiple sources? I just don't understand the point of even making this software if all it can do is sound smart while being wrong.

[–] Voroxpete@sh.itjust.works 73 points 13 hours ago* (last edited 13 hours ago) (8 children)

Aren’t they processing high quality data from multiple sources?

Here's where the misunderstanding comes in, I think. And it's not the high quality data or the multiple sources. It's the "processing" part.

It's a natural human assumption to imagine that a thinking machine with access to a huge repository of data would have little trouble providing useful and correct answers. But the mistake here is in treating these things as thinking machines.

That's understandable. A multi-billion dollar propaganda machine has been set up to sell you that lie.

In reality, LLMs are word prediction machines. They try to predict the words that would likely follow other words. They're really quite good at it. The underlying technology is extremely impressive, allowing them to approximate human conversation in a way that is quite uncanny.

But what you have to grasp is that you're not interacting with something that thinks. There isn't even an attempt to approximate a mind. Rather, what you have is a confabulation engine: a machine for producing plausible fictions. It does this by building unbelievably huge numerical maps of words - operating in thousands of dimensions at once, graphs with many times more axes than we have letters, held together by billions of learned weights - and probabilistically associating words with each other. It's all very clever, but what it produces is 100% fake, made up, totally invented.

Now, because of the training data they've been fed, those made-up answers will, depending on the question, sometimes end up being right. For certain types of question they can actually be right quite a lot of the time. For other types of question, almost never. But the point is, they're only ever right by accident. The "AI" is always, always constructing a fiction. That fiction just sometimes aligns with reality.
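If you want to see the basic idea in miniature, here's a toy version. A real LLM is a neural network over tokens, not a lookup table, but the core move - predict a likely next word, with no notion of whether any of it is true - is the same:

```python
from collections import Counter, defaultdict
import random

# Toy "LLM": count which word follows which in the training text,
# then generate by repeatedly sampling a likely next word.

training_text = (
    "the model predicts the next word "
    "the model predicts the most likely word "
    "the answer sounds right the answer is made up"
)

# Build a tiny table of next-word frequencies (a miniature association matrix)
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        # Sample the next word in proportion to how often it followed before.
        choices, counts = zip(*options.items())
        out.append(random.choices(choices, weights=counts)[0])
    return " ".join(out)

print(generate("the"))
# e.g. "the answer sounds right the model predicts the next"
# Fluent-looking, statistically plausible, and completely indifferent to truth.
```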

[–] kayohtie@pawb.social 1 points 7 hours ago (1 children)

Even the "thinking engine" ones are wild to watch in motion, if you ever turn on debugging. It's like watching someone substitute the autosuggest of your keyboard for what words appear in your head when trying to think through something. It just generates something and then generates again using THAT output (multiple times maybe involved for each step).

I watched one I installed locally for Home Assistant, as a test for various operations, start repeating itself over and over in response to nearly everything before it spat out something completely wrong.

Garbage engines.

[–] Voroxpete@sh.itjust.works 3 points 4 hours ago

I assume by "thinking engine" you mean "Reasoning AI".

Reasoning AI is just more bullshit. What happens is that they produce the output the way they always do - by guessing at a sequence of words that is statistically adjacent to the input they're given - but then they produce a randomly generated "chain of thought," which is invented in the same way as the result: just pure statistical word association. Essentially they create the output the same way a non-reasoning LLM does, then give themselves the prompt "Write a chain of thought for this output." There's a little extra stuff going on where they sort of check their own output, but in essence that's just done by running the model multiple times and picking the output they converge on. So, just weighting the randomness, basically.

I'm simplifying a lot here obviously, but that's pretty much what's going on.
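A rough sketch of that last part - run the model several times and keep whatever answer they converge on - with sample_answer() standing in for one run of the model:

```python
from collections import Counter
import random

def sample_answer(prompt: str) -> str:
    # Stand-in for one run of the model with its randomness (temperature) turned on.
    return random.choice(["42", "42", "42", "41", "7"])

def reasoning_answer(prompt: str, runs: int = 9) -> str:
    # Run the same prompt several times and keep the answer they converge on.
    # No understanding involved - just a majority vote over random samples.
    answers = [sample_answer(prompt) for _ in range(runs)]
    return Counter(answers).most_common(1)[0][0]

print(reasoning_answer("What is 6 * 7?"))  # usually "42", but only by vote
```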
