this post was submitted on 22 Jun 2025
108 points (73.9% liked)

Programming Humor

[–] doomcanoe@sh.itjust.works 26 points 5 days ago (1 children)

The people here saying "no shit, LLMs weren't even designed to play chess" are not the people this is directed at.

Multiple times at my job I have had to explain, often to upper management, that LLMs are not AGIs.

Stories like these help an under-informed general public wrap their heads around the idea that a "computer that can talk" =/= a "computer that can truly think/reason".

[–] MolecularCactus1324@lemmy.world 6 points 5 days ago* (last edited 2 days ago)

They say LLMs can “reason” now, but they obviously can’t. At best, they can be trained to write a code snippet and run it to get the answer. I’ve noticed that when asked to do math, ChatGPT will now translate my question into Python and run that to get the answer, since it can’t reliably do math itself.
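The pattern being described can be sketched like this (a minimal illustration, not ChatGPT's actual implementation — `generate_code` is a hypothetical stand-in for the model emitting a Python snippet, which the host then executes):

```python
def generate_code(question: str) -> str:
    # Hypothetical stand-in for the LLM: in the real system the model
    # would emit this snippet; here it is hard-coded for one question.
    return "result = sum(i * i for i in range(1, 101))"

def answer_with_tool(question: str) -> int:
    code = generate_code(question)
    namespace: dict = {}
    exec(code, namespace)        # run the generated snippet
    return namespace["result"]   # read back the computed answer

print(answer_with_tool("What is the sum of the squares from 1 to 100?"))
# 338350
```

The point is that the arithmetic happens in the Python interpreter, not in the model's forward pass.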

There are chess algorithms that win by searching the game tree 5, 10, or more moves ahead and choosing the move most likely to lead to an optimal outcome. This is probably essentially what the Atari engine is doing. An LLM could be given the tools to run such an algorithm for it, but the LLM itself can’t possibly do the same thing.