this post was submitted on 22 Jun 2025
108 points (73.9% liked)

Programming Humor

Blue_Morpho@lemmy.world 0 points 5 days ago

It’s a fundamental limitation of how LLMs work.

Newer models like o3 and DeepSeek-R1 bolt a reasoning front end onto the base LLM. That's why they can solve problems that plain LLMs failed at.

I found one reference putting o3 at around an 800 chess rating, but I'd really like to see Atari chess vs. o3 head to head. Me telling my friend how I think it would fail isn't convincing.