This post was submitted on 24 Dec 2024
-2 points (33.3% liked)

Artificial Intelligence

top 2 comments
[–] bloup@lemmy.sdf.org 5 points 2 months ago (1 children)

In my experience, every time there’s been a new model I’m pretty astonished by its capabilities for mathematics and programming. But every single time it seems to rapidly regress to worse than it was before the new model was released. I’m guessing that there is some kind of loss leader thing going on where they support the model with a completely unsustainable level of compute to hook you and then throttle it somehow to improve the economics for the business.

[–] sith@lemmy.zip 0 points 2 months ago* (last edited 2 months ago)

It's for sure not impossible. But my guess is that it's because you learn the new model and your behavior and expectations change. It's a known phenomenon and I do believe the developers/companies when they say that they didn't change anything. It's also quite easy to verify/test this hypothesis by using locally hosted LLMs. There are probably a few papers covering this already.
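
For example, one rough way to run such a check (a minimal sketch, assuming the model is served locally through Ollama's default /api/generate endpoint; the model name, prompts, and baseline file below are placeholder assumptions, not anything from this thread) is to re-run a fixed prompt set with deterministic sampling settings and compare the answers to a saved baseline:

```python
# Sketch: re-run a fixed prompt set against a locally hosted model and
# compare the answers to a saved baseline. Assumes an Ollama server at the
# default local address; MODEL, PROMPTS, and the baseline path are
# illustrative placeholders.
import hashlib
import json
import pathlib

import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed default Ollama endpoint
MODEL = "llama3"                                     # hypothetical local model name
BASELINE = pathlib.Path("baseline.json")

PROMPTS = [
    "Factor the polynomial x^2 - 5x + 6.",
    "Write a Python function that reverses a linked list.",
]

def ask(prompt: str) -> str:
    """Query the local model with deterministic sampling settings."""
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": MODEL,
            "prompt": prompt,
            "stream": False,
            "options": {"temperature": 0, "seed": 42},
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

def fingerprint(answers: list[str]) -> str:
    """Hash all answers so runs can be compared at a glance."""
    return hashlib.sha256("\n---\n".join(answers).encode()).hexdigest()

answers = [ask(p) for p in PROMPTS]
current = fingerprint(answers)

if BASELINE.exists():
    previous = json.loads(BASELINE.read_text())["fingerprint"]
    print("unchanged" if previous == current else "outputs differ from baseline")
else:
    BASELINE.write_text(json.dumps({"fingerprint": current, "answers": answers}))
    print("baseline recorded; re-run later to compare")
```

If the fingerprint stays the same over weeks while your impression of the model gets worse, that points to shifting expectations rather than a silently degraded model.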

Though it does happen that one is downgraded to a smaller model when using the free versions of OpenAI, Anthropic and others. But in my experience this information is always explicit in the UI. Still, it's probably quite easy to miss.

Also, I'm almost exclusively using the free version of Mistral Large (Le Chat) and I've never experienced regression. But Mistral also never downgrades; it just becomes very slow.