Qwen Coder or the new Gemma 3.
But at this size, using a privacy-respecting API might be both cheaper and lead to better results.
Can't you tag the NSFW content to filter it out?
Most LLM projects support Vulkan if you have enough VRAM.
Well, they are fully closed source except for the open-source project they wrap: llama.cpp.
Try Podman Desktop if you want a GUI to manage your containers and Docker Desktop is the source of the crashes. You can run Docker images/containers/Kubernetes through it as well as Podman ones.
NixOS doesn't play well with rootless containers in my experience.
Not really a "leak" for Cursor; the prompts are publicly available when you send the request. They just don't show them in the UI.
Logseq is also really, really slow once you have a lot of notes, unfortunately.
It is open-weight; we don't have access to the training code or the dataset.
That being said, it should be safe for your computer to run DeepSeek's models, since the weights are .safetensors files, a format that should block any code execution from code injected into the model weights.
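To sketch why .safetensors can't smuggle in code: the format is just an 8-byte little-endian header length, a JSON header, then raw tensor bytes, so loading is pure data parsing with no pickle or eval step. Here's a minimal, hypothetical example built in memory with only the standard library (the tensor name and layout are made up for illustration):

```python
# Sketch of the .safetensors layout: u64 LE header size, JSON header, raw bytes.
# Parsing it never executes code, unlike pickle-based .bin checkpoints.
import json
import struct

def read_safetensors_header(blob: bytes) -> dict:
    # First 8 bytes: little-endian unsigned 64-bit length of the JSON header.
    (header_len,) = struct.unpack("<Q", blob[:8])
    # The header itself is plain JSON describing each tensor.
    return json.loads(blob[8 : 8 + header_len])

# Build a tiny in-memory file: one hypothetical float32 tensor of 4 zeros.
header = json.dumps(
    {"weight": {"dtype": "F32", "shape": [4], "data_offsets": [0, 16]}}
).encode()
blob = struct.pack("<Q", len(header)) + header + b"\x00" * 16

print(read_safetensors_header(blob)["weight"]["shape"])  # → [4]
```

Since the payload after the header is raw tensor data located by `data_offsets`, there's no deserialization hook for an attacker to hijack, which is the whole point of the format.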
Ollama isn't made by Facebook; the Llama models are. Ollama is just a CLI wrapper around llama.cpp, and both are FOSS projects.
Their software is pretty nice. That's what I'd recommend to someone who doesn't want to tinker. It's just a shame they don't want to open source their software, so we have to reinvent the wheel 10 times. If you are willing to tinker a bit, koboldcpp + OpenWebUI/LibreChat is a pretty nice combo.