nednobbins

joined 2 years ago
[–] nednobbins@lemm.ee 4 points 6 days ago

Over the Iran attack? I'm pretty sure he broke ranks years ago.

[–] nednobbins@lemm.ee 11 points 1 week ago

Yes. It's crazy. That's why the vast majority of us don't do it.
It's one thing to be a vegetarian for health or environmental reasons.
When you try to convince people that meat==murder, you come across as a wackadoodle.

[–] nednobbins@lemm.ee 15 points 2 weeks ago (5 children)

That's great news. The other 9 of the 10 biggest protests were extremely successful at effecting change.

Since we made such massive progress on all the others, this is clearly a harbinger of social and political progress.

[–] nednobbins@lemm.ee 2 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

How would you react if you saw a similar exchange between MAGAs?

MAGA A: .
MAGA B: You don't really mean that, right? It's not all of them.
MAGA A: I'm just joking. Relax.

Would you take that response at face value or would you assume that the joke is a thinly veiled statement of their actual beliefs?

[–] nednobbins@lemm.ee 23 points 2 weeks ago (2 children)

Fuck the whole HP franchise.

It was always shitty writing and the plot was garbage. The whole story was a thinly veiled glorification of British exceptionalism.

The only saving grace of that stinking turd of a franchise is that, in the '90s, it seemed like a good way to get kids to read.

[–] nednobbins@lemm.ee 2 points 2 weeks ago

I wouldn't either but that's exactly what lmsys.org found.

That blog post had ratings between 858 and 1169. Those are slightly higher than the average rating of human users on popular chess sites. Their latest leaderboard shows them doing even better.

https://lmarena.ai/leaderboard has one of the Gemini models with a rating of 1470. That's pretty good.
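Those are Elo-style ratings, so a rating gap translates directly into an expected score. A quick sketch of the standard Elo expected-score formula, using the two ratings mentioned above purely as an illustrative pairing:

```python
def elo_expected_score(r_a, r_b):
    # Probability that player A beats player B under the Elo model.
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

# A 1470-rated player vs. an 858-rated one, within the same rating pool.
print(round(elo_expected_score(1470, 858), 3))  # → 0.971
```

That 612-point gap corresponds to winning about 97% of the time, which is why a jump from the 858–1169 range to 1470 is a big deal.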

[–] nednobbins@lemm.ee 2 points 2 weeks ago (2 children)

I imagine the "author" did something like, "Search http://google.scholar.com/, find a publication where AI failed at something, and write a paragraph about it."

It's not even as bad as the article claims.

Atari isn't great at chess. https://chess.stackexchange.com/questions/24952/how-strong-is-each-level-of-atari-2600s-video-chess
Random LLMs were nearly as good 2 years ago. https://lmsys.org/blog/2023-05-03-arena/
LLMs that are actually trained for chess have done much better. https://arxiv.org/abs/2501.17186

[–] nednobbins@lemm.ee 1 points 2 weeks ago

> Like humans are way better at answering stuff when it’s a collaboration of more than one person. I suspect the same is true of LLMs.

It is.

It's really common in non-language applications of neural networks. If you have a network that's right some percentage of the time, you can often run the same input through several independently trained copies of the network and average (or majority-vote) their outputs, and the combined answer is correct a higher percentage of the time than any single copy.
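A minimal sketch of that effect with a toy binary classifier. The 70% single-model accuracy and 15-member ensemble are illustrative assumptions, not figures from any real model; the point is just that a majority vote over independent models beats any one of them:

```python
import random

random.seed(0)

def noisy_classifier(true_label):
    # A single model that answers correctly 70% of the time.
    return true_label if random.random() < 0.7 else 1 - true_label

def ensemble(true_label, n_models=15):
    # Majority-vote the predictions of n_models independent copies.
    votes = sum(noisy_classifier(true_label) for _ in range(n_models))
    return 1 if votes > n_models / 2 else 0

trials = 10_000
single_acc = sum(noisy_classifier(1) == 1 for _ in range(trials)) / trials
ensemble_acc = sum(ensemble(1) == 1 for _ in range(trials)) / trials
print(f"single: {single_acc:.2f}, ensemble: {ensemble_acc:.2f}")
```

With 15 independent 70%-accurate voters, the majority is right roughly 95% of the time. The caveat is independence: if the copies make correlated errors, the gain shrinks.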

Aider is an open source AI coding assistant that lets you use one model to plan the change and a second one to write the actual code. It works better than doing it in a single pass, even if you assign the same model to both planning and coding.

[–] nednobbins@lemm.ee 22 points 2 weeks ago

The Israelis regularly murder journalists and civilians. The danger was quite real.

[–] nednobbins@lemm.ee 50 points 2 weeks ago (12 children)

Sometimes it seems like most of these AI articles are written by AIs with bad prompts.

Human journalists would hopefully do a little research. A quick search would reveal that researchers have been publishing on this for over a year, so there's no need to sensationalize it. Perhaps the human journalist could have spent a little time talking about why LLMs are bad at chess and how researchers are approaching the problem.

LLMs on the other hand, are very good at producing clickbait articles with low information content.

[–] nednobbins@lemm.ee 1 points 3 weeks ago (1 children)

That makes sense. Not everything needs to be testable. There are many interesting and important ideas outside of science.

The main problem would be if someone wanted to set policy based on it. That includes the implicit experiment of, "If we adopt policy A we can expect outcome B." If we haven't tested that before turning it into a policy, the policy itself becomes the experiment, and then we need to be very careful about the ethics surrounding such an experiment.

I'm studying Chinese. I'm always looking for friends to practice Chinese with.
