scruiser

joined 2 years ago
[–] scruiser@awful.systems 12 points 1 month ago (7 children)

The Wikipedia talk page is some solid sneering material. It's like Habryka and HandofLixue can't imagine any legitimate reason why Wikipedia has the norms it does, and they can't imagine how a neutral Wikipedian could come to write that article about LessWrong.

Eigenbra accurately calling them out...

"I also didn't call for any particular edits". You literally pointed to two sentences that you wanted edited.

Your twitter post also goes against Wikipedia practices by casting WP:ASPERSIONS. I can't speak for any of the other editors, but I can say I have never read nor edited RationalWiki, so you might be a little paranoid in that regard.

As to your question:

Was it intentional to try to pick a fight with Wikipedians?

It seems to be ignorance on Habryka's part, but judging by the talk page, instead of acknowledging their ignorance of Wikipedia's reasonable policies, they seem to be doubling down.

[–] scruiser@awful.systems 7 points 1 month ago (1 children)

Also lol at the 2027 guys believing anything about how Grok was created.

Judging by various comments the AI 2027 authors have made, sucking up to the techbro side of the alt-right was in fact a major goal of AI 2027, and, worryingly, they seem to have succeeded somewhat (allegedly JD Vance has read AI 2027). But lol at the notion they could ever talk any of the techbro billionaires into accepting any meaningful regulation. They still don't understand that their doomerism is free marketing hype for the techbros, not something any of them actually treats as meaningfully real.

[–] scruiser@awful.systems 6 points 1 month ago

Yeah, AI 2027's model fails basic back-of-the-envelope checks as soon as you try working out any of its features, which really calls into question the competency of its authors and everyone who has signal-boosted it. Like, they could have easily generated the same crit-hype bullshit with "just" an exponential model, but for whatever reason they went with this one. (They had a target date they wanted to hit? They correctly realized adding extraneous details would wow more of their audience? They are incapable of translating their intuitions into math? All three?)

[–] scruiser@awful.systems 11 points 1 month ago (1 children)

We did make fun of titotal for the effort they put into meeting rationalists on their own terms and charitably addressing their arguments, and, you know, for being an EA themselves (albeit one of the saner ones)...

[–] scruiser@awful.systems 13 points 1 month ago* (last edited 1 month ago) (8 children)

So us sneerclubbers correctly dismissed AI 2027 as bad scifi with a forecasting model that basically amounts to "line goes up", but if you end up in any discussions with people who want more detail, titotal did a really thorough breakdown of why their model is bad, even granting their assumptions and their attempt to model "line goes up": https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models

tl;dr: the AI 2027 model, regardless of inputs and current state, has task time horizons going to infinity at some near-future date because of how they set it up. The authors also make a lot of other questionable choices and have a lot of other red flags in their modeling. And the task-time-horizon curve fit shown on their fancy graphical interactive webpage is unrelated to the model they actually used, and it omits some earlier data points that would make the fit look worse.
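
For a feel of the "goes to infinity at a finite date" problem, here's a toy sketch in Python (illustrative numbers only, not the actual AI 2027 parameters): if each doubling of the task time horizon takes a fixed fraction less calendar time than the previous one, the total time for infinitely many doublings is a convergent geometric series, so the horizon blows up at a fixed date no matter where the curve starts.

```python
# Toy "superexponential" time-horizon model (made-up numbers, not the
# actual AI 2027 parameters): each doubling of the task time horizon
# takes a fixed fraction less calendar time than the one before it.
first_doubling_years = 0.5  # hypothetical: the first doubling takes six months
shrink = 0.9                # hypothetical: each doubling takes 90% as long as the previous one

# Total calendar time for infinitely many doublings is a geometric series:
# 0.5 * (1 + 0.9 + 0.9**2 + ...) = 0.5 / (1 - 0.9) = 5 years.
blowup_years = first_doubling_years / (1 - shrink)
print(f"horizon hits infinity after ~{blowup_years:.1f} years, whatever the starting horizon")

# A plain exponential model (constant doubling time) grows fast too,
# but never produces a finite-time singularity.
years, horizon = 0.0, 1.0
while years < blowup_years:
    horizon *= 2
    years += first_doubling_years
print(f"plain exponential over the same {blowup_years:.1f} years: horizon only grows {horizon:.0f}x")
```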

[–] scruiser@awful.systems 8 points 1 month ago

If you wire the LLM directly into a proof checker (as with AlphaGeometry) or an evaluation function (as with AlphaEvolve), and the raw LLM outputs aren't allowed to do anything on their own, you can get reliability. So you can hope for better; it just requires a narrow domain and a much more thorough approach than slapping some extra-firm instructions, in an unholy blend of markup languages, into the prompt.
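
As a minimal sketch of what that wiring means (hypothetical function names, not the actual AlphaGeometry/AlphaEvolve code): the LLM only proposes candidates, and nothing gets returned unless an external checker accepts it, so the reliability comes entirely from the verifier.

```python
# Propose-and-verify loop: the LLM is just a candidate generator, and an
# external checker (proof checker, test suite, evaluation function, ...)
# is the sole judge of what counts as an answer. Names are hypothetical.
def solve_with_verification(problem, llm_propose, verifier, max_attempts=20):
    for _ in range(max_attempts):
        candidate = llm_propose(problem)           # raw LLM output: untrusted
        ok, result = verifier(problem, candidate)  # deterministic external check
        if ok:
            return result                          # only verified results escape the loop
    return None                                    # give up rather than pass along slop
```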

In this case, solving math problems is actually something Google search could previously do (before dumping AI into it) and Wolfram Alpha can do, so it really seems like Google should be able to offer a product that gets math problems right. Of course, that solution would probably involve bypassing the LLM altogether with preprocessing and post-processing.
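
Something like this sketch, with sympy standing in for the Wolfram-Alpha-style backend (my choice of library for illustration, not anything Google has announced): detect that the query is a math expression and never let the LLM near the arithmetic.

```python
# Pre/post-processing that routes math around the LLM entirely,
# with sympy as a stand-in for a proper computer-algebra backend.
from sympy import SympifyError, sympify

def answer(query, llm_fallback):
    try:
        expr = sympify(query)       # parses things like "2**10 + 7" or "sqrt(2)*sin(pi/4)"
        return str(expr.evalf())    # evaluated symbolically/numerically, no sampling involved
    except (SympifyError, SyntaxError, TypeError):
        return llm_fallback(query)  # everything that isn't parseable math goes to the LLM

# answer("2**10 + 7", some_llm) -> 1031, computed by sympy rather than predicted by the LLM
```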

Also, btw, LLMs can be (technically speaking) deterministic if the temperature is set all the way down to zero; it's just that this doesn't actually improve their performance at math or anything else. And the output would still be "random" in the sense that minor variations in the prompt or preceding context can induce seemingly arbitrary changes.
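
A toy illustration of what temperature-zero sampling means (made-up logits, not any particular model): greedy decoding always picks the highest-scoring token, so a fixed prompt gives a fixed output, while any temperature above zero reintroduces sampling noise.

```python
import numpy as np

def sample_token(logits, temperature, rng):
    # Temperature 0 = greedy decoding: always take the most likely token,
    # which makes generation deterministic for a fixed model and prompt.
    if temperature == 0:
        return int(np.argmax(logits))
    probs = np.exp((logits - logits.max()) / temperature)
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.5, 0.3])     # made-up scores for three candidate tokens
print(sample_token(logits, 0.0, rng))  # always token 0
print(sample_token(logits, 1.0, rng))  # can vary from call to call
```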

[–] scruiser@awful.systems 9 points 1 month ago (3 children)

Have they fixed it as in it now genuinely uses Python completely reliably, or "fixed" it as in they tweaked the prompt and now it uses Python 95% of the time instead of 50/50? I'm betting on the latter.

[–] scruiser@awful.systems 18 points 1 month ago

We barely understand how LLMs actually work

I would be careful how you say this. Eliezer likes to go on about giant inscrutable matrices to fearmonger, and the promptfarmers use the (supposed) mysteriousness as another avenue for crit-hype.

It's true that reverse-engineering any specific output or task takes a lot of effort, requires access to the model's internal weights, and hasn't been done for most tasks, but the techniques for doing so exist. And in general there is a good high-level conceptual understanding of what makes LLMs work.

which means LLMs don’t understand their own functioning (not that they “understand” anything strictly speaking).

This part is absolutely true. If you catch them in a mistake, most of their data about how to respond comes from how humans respond (or, at best, from fine-tuning on other LLMs' output), and they have no way of checking their own internals, so the words they produce in response to mistakes are just more bs unrelated to anything.

[–] scruiser@awful.systems 15 points 1 month ago

Example #"I've lost count" of LLMs ignoring instructions and operating like the bullshit spewing machines they are.

[–] scruiser@awful.systems 17 points 1 month ago (1 children)

Another thing that's been annoying me about responses to this paper... lots of promptfondlers are suddenly upset that we are judging LLMs by arbitrary puzzle-solving capabilities... as opposed to the arbitrary and artificial benchmarks they love to tout.

[–] scruiser@awful.systems 26 points 1 month ago (2 children)

So, I've been spending too much time on subreddits with a heavy promptfondler presence, such as /r/singularity, and the reddit algorithm keeps recommending me subreddits with even more unhinged LLM hype. One annoying trend I've noted is that people constantly conflate LLM-hybrid approaches, such as AlphaGeometry or AlphaEvolve (or even approaches that don't involve LLMs at all, such as AlphaFold), with LLMs themselves. From there they act like of course LLMs can [insert things LLMs can't do: invent drugs, optimize networks, reliably solve geometry exercises, etc.].

Like, I saw multiple instances of commenters questioning/mocking/criticizing the recent Apple paper using AlphaGeometry as a counterexample. AlphaGeometry can actually solve most of the problems without an LLM at all: the LLM component replaces a set of heuristics that suggest proof approaches, while the majority of the proof work is done by a symbolic engine working within a rigid formal proof system.
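
Roughly, the division of labor looks like the sketch below (hypothetical function names, my paraphrase of the published architecture, not DeepMind's code): the symbolic engine does the actual proving, and the language model only gets consulted for auxiliary constructions when the engine is stuck.

```python
# Rough sketch of the AlphaGeometry-style division of labor (hypothetical names):
# a symbolic deduction engine does the proving; the language model only suggests
# auxiliary constructions (new points/lines) when deduction stalls.
def prove(problem_facts, symbolic_engine, lm_suggest_construction, max_constructions=10):
    facts = list(problem_facts)
    for _ in range(max_constructions):
        proof = symbolic_engine(facts)    # exhaustive deduction in a rigid formal system
        if proof is not None:
            return proof                  # the proof itself is entirely symbolic
        # Engine is stuck: the LM's only job is to guess a helpful auxiliary
        # construction to add before trying again.
        facts.append(lm_suggest_construction(facts))
    return None
```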

I don't really have anywhere I'm going with this, just something I noted that I don't want to waste the energy repeatedly re-explaining on reddit, so I'm letting a primal scream out here to get it out of my system.
