this post was submitted on 24 Jun 2025
9 points (73.7% liked)
Ollama - Local LLMs for everyone!
A place to discuss Ollama: from basic use, extensions and addons, and integrations, to using it in custom code to create agents.
founded 1 week ago
I currently don't. But I am ollama-curious. I'd like to feed it a bunch of technical manuals and then be able to ask it to recite specs or procedures (with optional links to its source info for sanity checking). Is this where I need to be looking/learning?
You might want to look into RAG (retrieval-augmented generation) and "long-term memory" concepts. I've been playing around with building a self-hosted LLM that has long-term memory (using pre-trained models), which is essentially what you're describing. Also, GPU matters: I'm using an RTX 4070 and it's noticeably slower than something like in-browser ChatGPT, but the 4070 is kinda pricey, so many home users may have earlier/slower GPUs.
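If it helps, here's a rough sketch of what that RAG flow looks like with the `ollama` Python package. This is just an illustration, assuming a local Ollama server, an embedding model (`nomic-embed-text`) and a chat model (`llama3`) already pulled, and your manuals already split into text chunks; the example chunks and model choices are placeholders.

```python
import ollama
import numpy as np

# 1. Embed the manual chunks once; these embeddings act as the "long-term memory".
chunks = [
    "Torque spec for the main bearing bolts is 45 Nm.",
    "Bleed the hydraulic line before replacing the pump.",
]
chunk_vecs = np.array([
    ollama.embeddings(model="nomic-embed-text", prompt=c)["embedding"]
    for c in chunks
])

def ask(question: str) -> str:
    # 2. Embed the question and pick the most similar chunk (cosine similarity).
    q = np.array(ollama.embeddings(model="nomic-embed-text", prompt=question)["embedding"])
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    best = chunks[int(sims.argmax())]

    # 3. Hand the retrieved chunk to the chat model as context, and echo the
    #    source text back so the answer can be sanity-checked against it.
    reply = ollama.chat(
        model="llama3",
        messages=[{
            "role": "user",
            "content": f"Answer using only this excerpt:\n{best}\n\nQuestion: {question}",
        }],
    )
    return f'{reply["message"]["content"]}\n\n[source: "{best}"]'

print(ask("What is the torque spec for the main bearing bolts?"))
```

In practice you'd swap the in-memory list for a vector database (Chroma, Qdrant, etc.) so thousands of manual pages can be indexed once and queried quickly, but the retrieve-then-generate loop stays the same.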