lily33

joined 2 years ago
[–] lily33@lemm.ee 4 points 2 days ago* (last edited 2 days ago)

An intelligence service monitors social media. They may as well have said, "The sky is blue."

More interesting is:

> Sharing as a force multiplier

> -- OpenAI

[–] lily33@lemm.ee 6 points 2 days ago* (last edited 2 days ago) (2 children)

Do you know of a provider that is actually private? The few privacy policies I checked all had something like "We might keep some of your data for some time for anti-abuse or other reasons"...

[–] lily33@lemm.ee 10 points 4 days ago

Too bad that's based on macros. A full preprocessor could require that all keywords and names in each scope form a prefix code, and then allow us to freely concatenate them.
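
For what it's worth, here's a toy Python sketch of the greedy splitting a prefix code would make possible (my own illustration, the token set is made up, not a real preprocessor):

```python
# If every keyword and name in scope forms a prefix code (no token is a prefix
# of another), a concatenated line can be split back into tokens greedily.
TOKENS = {"int", "main", "return", "0"}  # hypothetical prefix-free scope

def split_tokens(source: str) -> list[str]:
    out, i = [], 0
    while i < len(source):
        for tok in TOKENS:
            # At most one token can match here, because the set is prefix-free.
            if source.startswith(tok, i):
                out.append(tok)
                i += len(tok)
                break
        else:
            raise ValueError(f"no token matches at position {i}")
    return out

print(split_tokens("intmainreturn0"))  # ['int', 'main', 'return', '0']
```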

[–] lily33@lemm.ee 7 points 1 week ago (2 children)

Aren't USAID grants public?

[–] lily33@lemm.ee 4 points 1 week ago* (last edited 1 week ago)

No, that's because social media is mostly used for informal communication, not scientific discourse.

I guarantee you that I would not use Lemmy any differently if posts were authenticated with private keys than I do now, when posts are authenticated by the user's instance. And I'm sure most people are the same.

Edit: Also, people can already authenticate the source by posting a direct link to it. Signing wouldn't really add much on top of that.
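
To make that concrete, here's roughly what per-post signing would look like (a sketch using Ed25519 via PyNaCl; the key handling and post format are made up):

```python
from nacl.signing import SigningKey  # pip install pynacl

# Hypothetical setup: every user keeps a long-lived signing key.
signing_key = SigningKey.generate()
verify_key = signing_key.verify_key

post = b"Posts were authenticated with private keys."
signed = signing_key.sign(post)

# Anyone holding the public key can check the post wasn't forged or altered...
verify_key.verify(signed)  # raises BadSignatureError if tampered with
# ...but the signature says nothing about whether the content is true.
```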

[–] lily33@lemm.ee 7 points 1 week ago* (last edited 1 week ago) (2 children)

Sure, but that has little to do with disinformation. Misleading/wrong posts don't usually spoof the origin - they post the wrong information in their own name. They might lie about the origin of their "information", sure - but that's not spoofing.

[–] lily33@lemm.ee 26 points 1 week ago* (last edited 1 week ago) (4 children)

I don't understand how this will help against deepfakes and fake news.

Like, if this post was signed, you would know for sure it was indeed posted by @lily33@lemm.ee, and not by a malicious lemm.ee admin or hacker*. But the signature can't really guarantee the truthfulness of the content. I could make a signed post claiming that the Earth is flat - or a deepfake video of NASA's administrator admitting as much.

Maybe I'm missing your point?

(*) unless the hacker hacked me directly

[–] lily33@lemm.ee 4 points 1 week ago

It works fine for me on Hyprland.

[–] lily33@lemm.ee 45 points 1 week ago* (last edited 1 week ago) (9 children)

That is why I just use `int main(){...}` without arguments instead.

[–] lily33@lemm.ee 5 points 1 week ago* (last edited 1 week ago)

https://openrouter.ai/deepseek/deepseek-r1 - offers multiple providers, so at least someone will be up (though note that most are more expensive than Deepseek themselves).
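
OpenRouter exposes an OpenAI-compatible endpoint, so something like this should work (untested sketch, bring your own API key):

```python
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible endpoint
    api_key="sk-or-...",                      # your OpenRouter key
)

resp = client.chat.completions.create(
    model="deepseek/deepseek-r1",
    messages=[{"role": "user", "content": "Briefly: why is the sky blue?"}],
)
print(resp.choices[0].message.content)
```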

[–] lily33@lemm.ee 3 points 2 weeks ago (1 children)

I don't think any kind of "poisoning" actually works. It's well known by now that data quality is more important than data quantity, so nobody just feeds training data in indiscriminately. At best it would hamper some FOSS AI researchers that don't have the resources to curate a dataset.
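
By "curate" I mean filtering along these lines (toy sketch, the heuristics are made up):

```python
# Minimal illustration of a quality filter in a data pipeline.
raw_docs = [
    "buy buy buy buy buy buy buy buy",  # spam / poisoned junk
    "Rayleigh scattering is why the daytime sky looks blue to human observers.",
]

def keep(doc: str) -> bool:
    words = doc.split()
    if len(words) < 8:                         # too short to be useful
        return False
    return len(set(words)) / len(words) > 0.5  # drop highly repetitive text

curated = [d for d in raw_docs if keep(d)]
print(curated)  # only the second document survives
```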

[–] lily33@lemm.ee 3 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

> What makes these consumer-oriented models different is that rather than being trained on raw data, they are trained on synthetic data from pre-existing models. That's what the "Qwen" or "Llama" parts mean in the name. The 7B model is trained on synthetic data produced by Qwen, so it is effectively a compressed version of Qwen. However, neither Qwen nor Llama can "reason," they do not have an internal monologue.

You got that backwards. They're other models - Qwen or Llama - fine-tuned on synthetic data generated by DeepSeek-R1. Specifically, on reasoning data, so that they can learn some of its reasoning ability.

But the base model - and so the base capability there - is that of the corresponding Qwen or Llama model. Calling them "DeepSeek-R1-something" doesn't change what they fundamentally are; it's just marketing.
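
Roughly, the distillation pipeline looks like this (a hand-wavy sketch; the prompts and API usage are my own assumptions, not DeepSeek's actual recipe):

```python
import json
from openai import OpenAI  # pip install openai

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="sk-or-...")

prompts = ["Prove that sqrt(2) is irrational.", "What is 17 * 24? Show your steps."]

# Step 1: have the big reasoning model write out its answers (with reasoning traces).
with open("distill.jsonl", "w") as f:
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="deepseek/deepseek-r1",
            messages=[{"role": "user", "content": prompt}],
        )
        record = {"prompt": prompt, "completion": resp.choices[0].message.content}
        f.write(json.dumps(record) + "\n")

# Step 2 (not shown): fine-tune the *base* Qwen/Llama model on distill.jsonl.
# The result imitates R1's traces, but its underlying capability is still Qwen/Llama.
```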

 

This is a meta-question about the community - but seeing how many posts here are made by L4sBot, I think it's important to know how it chooses the articles to post.

I've tried to find information about it, but I couldn't find much.

 

I'm not a lawyer, but my understanding of a license is that it gives me permission to use/distribute something that's otherwise legally protected. For instance, software code is protected by copyright, and FOSS licenses give me the right to distribute it under some conditions.

However, LLM weights are produced by a computer, and (as far as I know) machine-generated works aren't covered by copyright. So I was hoping someone with a better understanding of the law could answer some questions for me:

  1. Is there some legal framework that protects AI models, so that I'd need a license to distribute them? How about using them, since many licenses do restrict use as well?

  2. If the answer to the above is no: By mentioning, following and normalizing LLM licenses, are we essentially helping establish the principle that we do need permission from companies to use their models, and that they have the right to restrict us?
