this post was submitted on 05 Aug 2025
27 points (100.0% liked)

Free Open-Source Artificial Intelligence


The recent DeepSeek, Qwen, and GLM models post impressive benchmark results. Do you use them through their own chatbots? Do you have any concerns about what happens to the data you put in there? If so, what do you do about it?

I am not trying to start a flame war around the China subject. It just so happens that these models are developed in China. My concerns with using the frontends also developed in China stem from:

  • A pattern of Chinese apps in the past being found to have minimal security
  • I don't think any of the three chatbots listed above let you opt out of having your prompts used for model training

I am also not claiming that non-China-based chatbots don't have privacy concerns, or that simply opting out of training gets you much on the privacy front.

[–] xodoh74984@lemmy.world 9 points 1 week ago* (last edited 1 week ago) (9 children)

I use open source 32B Chinese models almost exclusively, because I can run them on my own machine without being a data cow for the US tech oligarchs or the CCP.

I only use the larger models for little hobby projects, and I don't care too much about who gets that data. But if I wanted to use the large models for something sensitive, the open source Chinese models are the more secure option IMO. Rather than get a "trust me bro" pinky promise from Closed AI or Anthropic, I can run Qwen or Kimi on a cloud GPU provider that offers raw compute by the hour without any data harvesting.

[–] Valmond@lemmy.world 2 points 1 week ago (8 children)

Any idea about the minimum specs to run them locally? Especially VRAM.
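As a rough rule of thumb (not a definitive spec), weight memory is roughly parameter count times bytes per weight, plus some overhead for the KV cache and activations that grows with context length. A minimal Python sketch of that back-of-the-envelope math for a 32B model, assuming a flat ~15% overhead for illustration:

```python
# Back-of-the-envelope VRAM estimate for a local LLM.
# Rule of thumb: weights ~= params * bytes-per-weight, plus overhead
# for KV cache and activations (context-length dependent; 15% here
# is an illustrative assumption, not a measured figure).

def vram_gb(params_b: float, bits_per_weight: float, overhead: float = 0.15) -> float:
    """Approximate VRAM in GB for `params_b` billion parameters."""
    weights_gb = params_b * bits_per_weight / 8  # 1B params at 1 byte/weight ~= 1 GB
    return weights_gb * (1 + overhead)

for label, bits in [("FP16", 16), ("Q8", 8), ("Q4_K_M", 4.5)]:
    print(f"{label}: ~{vram_gb(32, bits):.0f} GB")
# -> FP16: ~74 GB, Q8: ~37 GB, Q4_K_M: ~21 GB
```

So a 32B model at a 4-bit quant lands in the ~20 GB range, i.e. a 24 GB GPU, while FP16 is out of reach for a single consumer card.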
