herseycokguzelolacak

joined 2 days ago

Swearing in source code points to a healthy and organic development.

Not off the top of my head, but there must be something. llama.cpp and vllm have basically solved the inference problem for LLMs. What you need is a RAG solution on top that also combines it with web search.
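The stack described above can be sketched roughly like this. Everything here is a toy stand-in: `retrieve` is naive keyword overlap (a real setup would use embeddings and/or a web-search API), and `chat` is a stub where you would POST to the OpenAI-compatible HTTP endpoint that llama.cpp's server or vllm exposes.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query: str, context: list[str]) -> str:
    """Stuff retrieved snippets (local docs or web-search results) into the prompt."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Context:\n{joined}\n\nQuestion: {query}\nAnswer using the context above."


def chat(prompt: str) -> str:
    # Stub for the actual LLM call; in practice, send this prompt to the
    # OpenAI-compatible server that llama.cpp or vllm runs locally.
    return f"[model answer conditioned on {prompt.count('- ')} context snippet(s)]"


if __name__ == "__main__":
    docs = [
        "llama.cpp serves GGUF models over an OpenAI-compatible HTTP API.",
        "vllm is a high-throughput inference engine for LLMs.",
        "Bananas are rich in potassium.",
    ]
    query = "How do I serve a local LLM over HTTP?"
    context = retrieve(query, docs)
    print(chat(build_prompt(query, context)))
```

The point of the sketch is the shape of the pipeline (retrieve → build prompt → generate), not the retriever itself; swapping the keyword scorer for a vector store or a web-search call doesn't change the structure.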

[–] herseycokguzelolacak@lemmy.ml 10 points 2 hours ago (1 children)

Wasn't this always the case? I remember flying into the US during the Biden era as a tourist and had to declare my social media accounts.

[–] herseycokguzelolacak@lemmy.ml 5 points 1 day ago (2 children)

For coding tasks you need web search and RAG. It's not the size of the model that matters; even the largest models look up solutions online.

[–] herseycokguzelolacak@lemmy.ml 89 points 1 day ago (5 children)

This is a "The worst person you know just made a great point" moment, isn't it?

LLMs are great at automating tasks where we know the solution. And there are a lot of workflows that fall in this category. They are horrible at solving new problems, but that is not where the opportunity for LLMs is anyway.

For VLMs I love Moondream2. It's a tiny model that punches well above its weight. llama.cpp supports it.

Israel is a terrorist entity.

Iran has a right to defend itself from Israeli terrorism.