this post was submitted on 11 Feb 2025
530 points (98.7% liked)

Technology

[–] brucethemoose@lemmy.world 31 points 1 week ago* (last edited 1 week ago) (24 children)

What temperature and sampling settings? Which models?

I've noticed that the AI giants seem to be encouraging "AI ignorance": they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools work under the hood. They also default to bad, cheap models.

I find my local thinking models (FuseAI, Arcee, or Deepseek 32B 5bpw at the moment) are quite good at summarization at a low temperature, which is not what these UIs default to, and I get to use better sampling algorithms than any of the corporate APIs. Same with "affordable" flagship API models (like base Deepseek, not R1). But the small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.
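As a concrete illustration of "low temperature plus better sampling" for summarization: this is a minimal sketch of a request payload for an OpenAI-compatible local server (the kind llama.cpp or TabbyAPI expose). The endpoint shape is standard, but the model name and the exact sampler values here are placeholder assumptions, not settings from this thread.

```python
# Sketch: a summarization request tuned the way the comment describes:
# low temperature for determinism, plus min_p sampling, which many
# local servers support but most hosted chat UIs don't let you touch.
# Model name and values are illustrative placeholders.
import json

def build_summarize_request(text, model="qwen2.5-32b-finetune"):
    """Build a chat-completion payload for a local OpenAI-compatible server."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Summarize the user's text in a few sentences."},
            {"role": "user", "content": text},
        ],
        "temperature": 0.2,  # low temperature: stick close to the most likely tokens
        "min_p": 0.05,       # min-p: drop tokens under 5% of the top token's probability
        "max_tokens": 256,
    }

payload = build_summarize_request("Long article text goes here...")
print(json.dumps(payload, indent=2))
```

You would POST this to the server's `/v1/chat/completions` endpoint; the point is that these knobs exist at the API level even when a vendor's app hides them.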

My point is that LLMs as locally hosted tools whose mechanics and limitations you understand are neat, but the way corporations present them as magic cloud oracles packs everything wrong with tech enshittification and crypto-bro hype into one package.

[–] MoonlightFox@lemmy.world 1 point 1 week ago (4 children)

I have been pretty impressed by Gemini 2.0 Flash.

It's slightly worse than the very best on the benchmarks I have seen, but it's pretty much instant and incredibly cheap. Maybe a loss leader?

Anyway, which of the commercial models do you consider good?

[–] brucethemoose@lemmy.world 2 points 1 week ago* (last edited 1 week ago) (3 children)

benchmarks

Benchmarks are so gamed that even Chatbot Arena is kinda iffy. TBH you have to test them with your own prompts yourself.

Honestly, I am getting incredible, creative responses from Deepseek R1; the hype is real, though it's frequently overloaded. Tencent's API is a bit underrated. If Llama 3.3 70B is smart enough for you, the Cerebras API is super fast.

Qwen Max is... not bad? The reasoning models kinda spoiled me, but I think they have more reasoning releases coming.

MiniMax is ok for long context, but I still tend to lean on Gemini for this.

I dunno about Claude these days, as it's just so expensive. I haven't touched OpenAI in a long time.

Oh, and sometimes "weird" finetunes you can find on OpenRouter or whatever will serve niches much better than "big" API models.

EDIT:

Locally, I used to hop around, but now I pretty much always run a Qwen 32B finetune. Either coder, Arcee Distill, FuseAI, R1, EVA-Gutenberg, or Openbuddy, usually.

[–] MoonlightFox@lemmy.world 1 point 1 week ago (1 children)

So there aren't any trustworthy benchmarks I can currently use to evaluate? That, in combination with my personal anecdotes, is how I have been evaluating them.

I was pretty impressed with Deepseek R1. I used their app, but not for anything sensitive.

I don't like that OpenAI defaults to a model I can't pick. I have to select it each time; even when I use a special URL, it changes after the first request.

I am having a hard time deciding which models to use, besides a random mix of o3-mini-high, o1, Sonnet 3.5, and Gemini 2 Flash.

[–] brucethemoose@lemmy.world 2 points 1 week ago

Heh, only obscure ones that they can't game, and only if they fit your use case. One example is the ones in EQ bench: https://eqbench.com/

…And again, the best mix of models depends on your use case.

I can suggest using something like Open Web UI with APIs instead of native apps. It gives you a lot more control, more powerful tooling to work with, and the ability to easily select and switch between models.
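To illustrate why an API-based frontend makes model switching easy: OpenAI-compatible endpoints all share one request shape, so changing providers is just a different base URL and model string. The provider entries below are illustrative assumptions for the sketch, not an endorsed list.

```python
# Sketch: one client, many OpenAI-compatible providers.
# Frontends like Open Web UI work this way under the hood;
# the URLs and model names here are placeholder examples.
providers = {
    "hosted": {"base_url": "https://api.example.com/v1",
               "model": "some-flagship-model"},
    "local":  {"base_url": "http://localhost:5000/v1",
               "model": "qwen-32b-finetune"},
}

def request_for(provider_key, prompt):
    """Return (url, payload) for a chat completion against the chosen provider.

    Only the base URL and model name change between providers; the
    payload shape stays identical, which is what makes switching cheap.
    """
    p = providers[provider_key]
    url = p["base_url"] + "/chat/completions"
    payload = {
        "model": p["model"],
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload

url, payload = request_for("local", "Summarize this thread.")
print(url)
```

Swapping `"local"` for `"hosted"` retargets the same prompt at a different backend, which is exactly the per-request model selection the native apps don't give you.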
