I've been working on something similar-ish on and off.
I came across three good solutions involving open-source models:
- KenLM/STT
- DeepSpeech
- Vosk
Vosk has the best models, but they are large. You can't use the gigaspeech model, for example (which is useful even with non-US English), to live-generate subs on many devices, because of its memory requirements. So my guess would be that whatever VLC provides will suck to an extent, because it will have to be fast and lightweight enough to run on typical hardware.
What also sets vosk-api apart is that you can ask it to provide multiple alternatives (10 is commonly used).
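For reference, a minimal sketch of this with vosk-api's Python bindings (the model path and WAV file are placeholders; the audio is assumed to be 16-bit mono PCM):

    import json
    import wave

    from vosk import KaldiRecognizer, Model

    model = Model("model")             # assumed: an unpacked Vosk model directory
    wf = wave.open("audio.wav", "rb")  # assumed: 16-bit mono PCM WAV
    rec = KaldiRecognizer(model, wf.getframerate())
    rec.SetMaxAlternatives(10)         # ask for up to 10 hypotheses

    while True:
        data = wf.readframes(4000)
        if not data:
            break
        if rec.AcceptWaveform(data):
            # With SetMaxAlternatives set, each result carries an
            # "alternatives" list instead of a single "text" field.
            for alt in json.loads(rec.Result())["alternatives"]:
                print(alt["confidence"], alt["text"])

    # Flush the last partial utterance as well.
    for alt in json.loads(rec.FinalResult())["alternatives"]:
        print(alt["confidence"], alt["text"])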
One core idea in my tool is to combine all alternatives into one text. Suppose the model predicts the text to be either "... still he ..." or "... silly ...". My tool can give you "... (still he|silly) ..." instead of a 50/50 gamble on one of them.
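A toy sketch of that merging step (my illustration of the idea, not the tool itself): align two hypotheses word by word and bracket the spans where they disagree.

    import difflib

    def merge_two(hyp_a: str, hyp_b: str) -> str:
        """Combine two transcript hypotheses into one annotated text."""
        a, b = hyp_a.split(), hyp_b.split()
        out = []
        for op, i1, i2, j1, j2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
            if op == "equal":
                out.extend(a[i1:i2])
            else:
                # Disagreement: keep both readings instead of picking one.
                out.append(f"({' '.join(a[i1:i2])}|{' '.join(b[j1:j2])})")
        return " ".join(out)

    print(merge_two("and still he went on", "and silly went on"))
    # -> and (still he|silly) went on

Merging all ten alternatives would take a multi-way alignment (or repeated pairwise merges), but the principle is the same.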
/tmp is still tmpfs here, but I prefer this (below) nowadays for usages that are not typical /tmp usages:
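A minimal sketch of the kind of ending meant here, rendered in Python for illustration (the zram-control sysfs interface is the real one, but the 4G size and /mnt/zram mount point are made-up values):

    import subprocess

    # Reading hot_add allocates a fresh zram device and returns its id
    # (needs root; pairs with "modprobe zram num_devices=0", see below).
    with open("/sys/class/zram-control/hot_add") as f:
        dev_id = f.read().strip()

    # Size the device; the kernel accepts suffixes like G here.
    with open(f"/sys/block/zram{dev_id}/disksize", "w") as f:
        f.write("4G")  # assumed size

    dev = f"/dev/zram{dev_id}"
    # ext4 without a journal, mounted with discard so that freed blocks
    # hand their RAM back (tmpfs-like behavior).
    subprocess.run(["mkfs.ext4", "-O", "^has_journal", dev], check=True)
    subprocess.run(["mount", "-o", "discard", dev, "/mnt/zram"], check=True)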
This is a part of my zram devices initialization script; a bunch of devices are used for swap as usual, and it ends with this. modprobe zram num_devices=0 works if one likes to hot_add all devices.

From zram you get a block device with builtin compression. From ext4, you still get features like creation time, truncate and fallocate support, etc. And with discard, RAM usage will be limited to used space (like tmpfs). Also, ext4 is used without a journal, to avoid what would be useless overhead in this use-case.