L_Acacia

joined 4 months ago
[–] L_Acacia@lemmy.ml 1 points 5 hours ago (1 children)

Their software is pretty nice. That's what I'd recommend to someone who doesn't want to tinker. It's just a shame they don't want to open-source their software, so we have to reinvent the wheel 10 times. If you are willing to tinker a bit, koboldcpp + open-webui/librechat is a pretty nice combo.
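Roughly, the combo looks like this (a sketch, not exact commands — the model filename is a placeholder, and check each project's docs for current flags):

```
# Start koboldcpp with a local GGUF model; it serves an
# OpenAI-compatible API on port 5001 by default.
python koboldcpp.py --model ./my-model.gguf --port 5001

# Point open-webui at that endpoint and use it as the frontend.
OPENAI_API_BASE_URL=http://localhost:5001/v1 open-webui serve
```

librechat works the same way: you configure it with the local OpenAI-compatible endpoint instead of a hosted provider.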

[–] L_Acacia@lemmy.ml 1 points 5 hours ago

Qwen Coder or the new Gemma 3.

But at this size, using a privacy-respecting API might be both cheaper and lead to better results.

[–] L_Acacia@lemmy.ml 1 points 5 hours ago

Can't you tag the NSFW content to filter it out?

[–] L_Acacia@lemmy.ml 2 points 5 hours ago

The project is a bit out of date for newer models, though older ones work great.

I recommend ComfyUI if you want fine-grained control over the generation and you like to tinker.

Swarm / Reforge / Invoke if you want a neat, up-to-date UI.

[–] L_Acacia@lemmy.ml 1 points 5 hours ago

Most LLM projects support Vulkan if you have enough VRAM.

[–] L_Acacia@lemmy.ml 1 points 5 hours ago (3 children)

Well, they are fully closed source except for the open-source project they are a wrapper around. The open-source part is llama.cpp.

[–] L_Acacia@lemmy.ml 2 points 5 hours ago* (last edited 3 hours ago)

Try Podman Desktop if you want a GUI to manage your containers and Docker Desktop is the source of the crashes. You can run Docker images / containers / Kubernetes through it as well as Podman ones.

[–] L_Acacia@lemmy.ml 4 points 5 hours ago (2 children)

NixOS doesn't play well with rootless containers in my experience.
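For anyone who wants to try anyway, this is roughly the relevant NixOS configuration (option names as they exist in nixpkgs; a sketch, adjust for your setup):

```nix
{
  # Rootless-friendly Podman with a Docker-compatible CLI and socket.
  virtualisation.podman = {
    enable = true;
    dockerCompat = true;        # provides a `docker` alias
    dockerSocket.enable = true;
  };

  # Or rootless Docker instead:
  # virtualisation.docker.rootless = {
  #   enable = true;
  #   setSocketVariable = true; # points DOCKER_HOST at the user socket
  # };
}
```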

[–] L_Acacia@lemmy.ml 2 points 5 hours ago

Not really a "leak" for Cursor: the prompts are publicly available when you send the request, they just don't show them in the UI.

[–] L_Acacia@lemmy.ml 2 points 1 month ago (1 children)

Logseq also gets really slow once you have a lot of notes, unfortunately.

[–] L_Acacia@lemmy.ml 11 points 1 month ago (1 children)

It is open-weight; we don't have access to the training code or the dataset.

That being said, it should be safe for your computer to run Deepseek's models, since the weights are .safetensors, which should block any code execution from code injected into the model weights.
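The reason .safetensors is considered safe to load: the file is just a length-prefixed JSON header followed by raw tensor bytes, so reading it is pure parsing, unlike pickle-based checkpoints where loading can execute arbitrary code. A minimal sketch of the file layout (the tensor name here is made up):

```python
import json
import struct

# Build a tiny in-memory .safetensors file: an 8-byte little-endian
# header length, then a JSON header describing tensors, then raw bytes.
header = {
    "example.weight": {  # hypothetical tensor name
        "dtype": "F32",
        "shape": [2],
        "data_offsets": [0, 8],
    }
}
header_bytes = json.dumps(header).encode("utf-8")
blob = struct.pack("<Q", len(header_bytes)) + header_bytes + b"\x00" * 8

# Reading it back is struct.unpack + json.loads: there is no
# deserialization hook that could run attacker-controlled code
# (contrast with pickle.load, which can invoke arbitrary callables).
(n,) = struct.unpack("<Q", blob[:8])
parsed = json.loads(blob[8 : 8 + n])
print(sorted(parsed))  # ['example.weight']
```

The danger with older .bin / .ckpt checkpoints is exactly that they go through pickle, so a tampered file can run code the moment you load it.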

[–] L_Acacia@lemmy.ml 12 points 1 month ago (1 children)

Ollama isn't made by Facebook; the llama models are. Ollama is just a CLI wrapper around llama.cpp, both of which are FOSS projects.
