[–] adarza@lemmy.ca 140 points 6 months ago (3 children)
  • no account or login required.
  • it's an addon (and one you have to go get), not baked-in.
  • limited to queries about content you're currently looking at.
    (it's not a general search or query engine)
  • llm is hosted by mozilla, not a third party.
  • session histories are not retained or shared, not even with mistral (it's their model).
  • user interactions are not used to train.
[–] jeena@piefed.jeena.net 26 points 6 months ago (3 children)

Thanks for the summary. So it still sends the data to a server, even if it's Mozilla's. Then I still can't use it for work, because the data is private and they wouldn't appreciate me sending their data to Mozilla.

[–] KarnaSubarna@lemmy.ml 21 points 6 months ago (1 children)

In such a scenario you need to host your LLM of choice locally.

[–] ReversalHatchery@beehaw.org 5 points 6 months ago (1 children)

does the addon support usage like that?

[–] KarnaSubarna@lemmy.ml 7 points 6 months ago (1 children)

No, but the “AI” option available on the Mozilla Labs tab in settings allows you to integrate with a self-hosted LLM.

I have had this setup running for a while now.
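
For anyone curious, the same integration can also be wired up through about:config. A minimal sketch, assuming the pref names used by recent Firefox builds and a self-hosted front end on port 3000 (both assumptions, adjust to your setup):

```
// assumed pref names from recent Firefox builds; values are examples only
browser.ml.chat.enabled        true                     // turn on the AI chatbot sidebar
browser.ml.chat.hideLocalhost  false                    // let localhost providers appear as an option
browser.ml.chat.provider       http://localhost:3000/   // URL of the self-hosted UI (e.g. Open WebUI)
```

Once those are set, the sidebar chatbot loads the self-hosted UI instead of a hosted provider.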

[–] cmgvd3lw@discuss.tchncs.de 4 points 6 months ago (1 children)

Which model are you running? How much RAM?

[–] KarnaSubarna@lemmy.ml 3 points 6 months ago* (last edited 6 months ago)

My (docker based) configuration:

Software stack: Linux > Docker Container > Nvidia Runtime > Open WebUI > Ollama > Llama 3.1

Hardware: i5-13600K, Nvidia 3070 ti (8GB), 32 GB RAM

Docker: https://docs.docker.com/engine/install/

Nvidia Runtime for docker: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html

Open WebUI: https://docs.openwebui.com/

Ollama: https://hub.docker.com/r/ollama/ollama
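
If anyone wants to reproduce this, here is a minimal sketch of the equivalent docker run commands, based on the docs linked above; the container names, ports, volume names and model tag are just examples and assume the NVIDIA container toolkit is already installed:

```sh
# Ollama with GPU access (needs the NVIDIA container toolkit from the link above)
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama

# pull and run the model inside the container (the 8B variant fits in 8 GB VRAM when quantized)
docker exec -it ollama ollama run llama3.1:8b

# Open WebUI as the front end, pointed at the Ollama API on the host
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:main
```

Open WebUI then ends up on http://localhost:3000, which is also what the Firefox sidebar can be pointed at.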

[–] LWD@lemm.ee 12 points 6 months ago* (last edited 2 weeks ago)

deleted by creator

[–] Hamartiogonic@sopuli.xyz -2 points 6 months ago* (last edited 6 months ago)

According to Microsoft, you can safely send your work-related stuff to Copilot. Besides, most companies already use a lot of Microsoft software and cloud services, so LLM queries don't add much on top. If you happen to be working for one of those companies, MS probably already knows what you do for a living, hosts your meeting notes, knows your calendar, etc.

If you’re working for Purism, RedHat or some other company like that, you might want to host your own LLM instead.

[–] fruitycoder@sh.itjust.works 9 points 6 months ago

That's really cool to see. To me, a trusted, hosted open-source model has been missing from the ecosystem. I really like the idea of web-centric integration too.