
What is everyone using as the LLM for Home Assistant voice when self-hosting Ollama? I've tried Llama and Qwen with varying degrees of success at understanding my commands. I'm currently on Llama, as it seems a little better. I just wanted to see if anyone has found a better model.
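
If it helps to compare candidates head-to-head, here's a minimal sketch that sends the same sample command to a couple of models through Ollama's REST API (port 11434 is Ollama's default; the model tags and the prompt below are illustrative placeholders, not recommendations):

```python
# Compare local models on one sample voice command via Ollama's chat API.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default endpoint
MODELS = ["llama3.1:8b", "qwen2.5:7b"]  # placeholder tags; use whatever you've pulled
PROMPT = "Turn off the kitchen lights and set the thermostat to 20 degrees."

for model in MODELS:
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": model,
            "messages": [{"role": "user", "content": PROMPT}],
            "stream": False,  # return one complete JSON response
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(f"--- {model} ---")
    print(resp.json()["message"]["content"])
```

Swapping the prompt for real utterances from your own voice pipeline makes the comparison more representative.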

Edit: as pointed out, this is more of a speech-to-text issue than an LLM model issue. I'm looking into alternatives to Whisper.
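
For anyone testing the speech-to-text side locally, here's a minimal sketch using the faster-whisper package (pip install faster-whisper); the audio filename is a placeholder:

```python
# Transcribe one recorded command to see where recognition goes wrong.
from faster_whisper import WhisperModel

# "small" trades accuracy for speed; larger checkpoints ("medium", "large-v3")
# transcribe better but need more memory.
model = WhisperModel("small", device="cpu", compute_type="int8")

segments, info = model.transcribe("sample_command.wav")  # placeholder file
print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```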

doodlebob@lemmy.world 2 points 3 days ago
spitfire@lemmy.world 2 points 3 days ago

So basically for people who have graphics cards with 24GB of VRAM (or more). While I do have one, that's probably something most people don't ;)
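
As a rough rule of thumb (an approximation, not an exact figure), weight memory scales with parameter count times bytes per weight, plus some overhead for the KV cache and runtime buffers:

```python
# Back-of-the-envelope VRAM estimate for a quantized model.
def approx_vram_gb(params_billions: float, bits_per_weight: int,
                   overhead_gb: float = 1.5) -> float:
    """Estimate VRAM needed to load a model at a given quantization."""
    weight_gb = params_billions * bits_per_weight / 8  # ~1 GB per 1B params at 8-bit
    return weight_gb + overhead_gb  # overhead: KV cache, buffers (rough guess)

for label, params, bits in [("7B @ Q4", 7, 4), ("13B @ Q4", 13, 4),
                            ("32B @ Q4", 32, 4), ("70B @ Q4", 70, 4)]:
    print(f"{label}: ~{approx_vram_gb(params, bits):.1f} GB")
```

By that estimate a 7B model at 4-bit fits comfortably in 8 GB, while 32B-class models and up are what start to demand a 24 GB card.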

doodlebob@lemmy.world 2 points 3 days ago

Yeah, I went a little crazy with it and built out a server just for AI/ML stuff 😬

spitfire@lemmy.world 1 point 3 days ago

I could probably run something on my gaming PC with a 3090, but that would be a big cost. Instead I've put my old 2070 in an existing server and I'm using it for more lightweight stuff (TTS, Obico, Frigate, Ollama with some small model).
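
For anyone sizing a similar multi-service setup, here's a minimal sketch (assuming the nvidia-ml-py package, pip install nvidia-ml-py) to check how much VRAM those stacked services actually leave free:

```python
# Report used vs. total VRAM on the first GPU (e.g. the 2070 above).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)   # byte counts: total/used/free
print(f"Used: {mem.used / 1024**3:.1f} GiB / {mem.total / 1024**3:.1f} GiB")
pynvml.nvmlShutdown()
```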