Yeah, any time it's regurgitating an IMO problem, that's proof it's almost superhuman, but any time it actually faces a puzzle with an unknown answer, this is not what it is for.
diz
Further support for the memorization claim: I posted examples (on this forum) of novel river-crossing puzzles where LLMs completely fail.
Note that Apple's actors / agents river crossing is a well-known "jealous husbands" variant, which you can ask a chatbot to explain to you. It gladly explains, even as it can't follow its own explanation (since of course it isn't its own explanation but a plagiarized one, even if it changes the words).
edit: https://awful.systems/post/4027490 and earlier https://awful.systems/post/1769506
I think what I need to do is write up a bunch of puzzles, assign them randomly to two sets, and test & post one set while holding back the second (not even testing it on any online chatbots). Then in a year or two, see how much the public set improves vs. the held-back one.
making LLMs not say racist shit
That is so 2024. The new big thing is making LLMs say racist shit.
Can't be assed to read the BS, but sometimes the use-after-free only happens in some rarely executed code path, or only when one branch is executed and then later another branch. So you may still need fuzzing to trigger the use-after-free for Valgrind to detect it.
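A contrived sketch of the kind of path-dependent bug I mean (all the names are made up): the free lives on a rare error path, the use on the normal path, and Valgrind only flags it on runs where the error path actually fires first — which is exactly what you need the fuzzer to find.

```c
#include <stdlib.h>
#include <string.h>

static char *session;

/* rare error path: frees the buffer but forgets to NULL the pointer */
static void abort_session(void) {
    free(session);
}

static void log_message(const char *msg) {
    strcpy(session, msg);  /* use-after-free, but only if abort_session() ran */
}

int main(void) {
    session = malloc(64);
    log_message("hello");  /* common path: Valgrind reports nothing */
    abort_session();       /* branch A: the rarely taken error path */
    log_message("boom");   /* branch B after A: invalid write, now Valgrind sees it */
    return 0;
}
```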
Chatbots ate my cult.
I swear I'm gonna plug an LLM into a rather traditional solver I'm writing. I may tuck a point deep into the paper about how it's quite slow to use an LLM to mutate solutions in a genetic algorithm or a swarm solver. And in any case the non-LLM operator would be the default (see the sketch below).
Normally I wouldn't sink that low, but I've got mouths to feed, and frankly, fuck it, they can persist in this madness for much longer than I can stay solvent.
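A minimal sketch of what I mean, everything made up for illustration (toy string-matching fitness, made-up USE_LLM_MUTATE switch): the mutation operator is a function pointer, the cheap non-LLM mutator is the default, and the LLM-backed one is a stub, since the actual API call is whatever your vendor sells you.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define POP 20
#define LEN 6
static const char *target = "solver";  /* toy fitness: match this string */

static int fitness(const char *s) {
    int score = 0;
    for (int i = 0; i < LEN; i++)
        score += (s[i] == target[i]);
    return score;
}

/* default operator: flip one random position; microseconds per call */
static void default_mutate(char *s) {
    s[rand() % LEN] = 'a' + rand() % 26;
}

/* hypothetical LLM-backed operator: one network round-trip per mutation,
 * i.e. orders of magnitude slower than default_mutate */
static void llm_mutate(char *s) {
    (void)s;  /* plug your chatbot of choice in here */
}

static int by_fitness_desc(const void *a, const void *b) {
    return fitness((const char *)b) - fitness((const char *)a);
}

int main(void) {
    char pop[POP][LEN + 1];
    srand((unsigned)time(NULL));
    for (int i = 0; i < POP; i++) {
        for (int j = 0; j < LEN; j++)
            pop[i][j] = 'a' + rand() % 26;
        pop[i][LEN] = '\0';
    }
    void (*mutate)(char *) = default_mutate;  /* non-LLM is the default */
    if (getenv("USE_LLM_MUTATE"))             /* the slow path is opt-in only */
        mutate = llm_mutate;
    for (int gen = 0; gen < 1000; gen++) {
        qsort(pop, POP, LEN + 1, by_fitness_desc);
        /* keep the top half, refill the bottom half with mutated copies */
        for (int i = POP / 2; i < POP; i++) {
            memcpy(pop[i], pop[rand() % (POP / 2)], LEN + 1);
            mutate(pop[i]);
        }
    }
    qsort(pop, POP, LEN + 1, by_fitness_desc);
    printf("best: %s (fitness %d)\n", pop[0], fitness(pop[0]));
    return 0;
}
```

Even in this toy, a network round-trip per mutation would dominate the runtime completely, which is the point I'd bury in the paper.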
This is as if there were a mass delusion that a pseudorandom number generator can serve as an oracle that predicts the future. Doing any kind of Monte Carlo simulation of something like weather in that world would of course "confirm" all the dumb shit.
I wonder what's gonna happen first, the bubble popping or Yudkowsky getting so fed up with gen AI he starts sneering.
Yeah plenty of opportunities to just work it into the story.
I dunno what kind of local models you could use, though. If it's a 3D game, then it's fine to require a GPU, but you wouldn't want to raise the minimum requirements too high. And you wouldn't want to spend 12 gigs of VRAM on a gimmick, either.
I think it could work as a minor gimmick, like the terminal-hacking minigame in Fallout. You have to convince the LLM to tell you the password, or you get to talk to a demented robot whose brain was fried by radiation exposure, or the like. Relatively inconsequential stuff, like being able to either talk your way through or just shoot your way through.
Unfortunately this shit is too slow and too huge to embed a local copy into a game. You need a lot of hardware compatibility, and running it in the cloud would cost too much.
I was trying out the free GitHub Copilot to see what the buzz is all about:
It doesn't even know its own settings. The one little useful thing that isn't plagiarism, providing a natural-language interface to its own bloody settings, and it couldn't do it.
All joking aside, there is something thoroughly fucked up about this.
What's fucked up is that we let these rich fucks threaten us with extinction to boost their stock prices.
Imagine if some cold fusion scammer were permitted to gleefully boast that his experimental cold fusion plant in the middle of a major city could blow it up: setting off little hydrogen explosions, adding a neutron source just to make it spicier, etc.
I'd just write the list, then assign randomly. Or perhaps pseudorandomly, like sorting by hash and then splitting in two, as in the sketch below.
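Something like this (FNV-1a picked arbitrarily, any stable hash works, and the puzzle titles are placeholders). It's deterministic, so anyone can reproduce the split from the puzzle list alone, and it isn't biased by the order you wrote the puzzles in.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

/* FNV-1a, 64-bit: a simple stable string hash */
static uint64_t fnv1a(const char *s) {
    uint64_t h = 14695981039346656037ULL;
    for (; *s; s++) {
        h ^= (unsigned char)*s;
        h *= 1099511628211ULL;
    }
    return h;
}

static int by_hash(const void *a, const void *b) {
    uint64_t ha = fnv1a(*(const char *const *)a);
    uint64_t hb = fnv1a(*(const char *const *)b);
    return (ha > hb) - (ha < hb);
}

int main(void) {
    const char *puzzles[] = {  /* placeholder titles */
        "river crossing, leaky boat",
        "jealous husbands, N couples",
        "wolf goat cabbage, two boats",
        "missionaries, one can't row",
    };
    int n = sizeof puzzles / sizeof *puzzles;
    qsort(puzzles, n, sizeof *puzzles, by_hash);
    for (int i = 0; i < n; i++)   /* first half public, second half held back */
        printf("%s set: %s\n", i < n / 2 ? "public" : "held-back", puzzles[i]);
    return 0;
}
```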
One problem is that it is hard to come up with 20 or more completely unrelated puzzles.
Although I don't think we need a large number for statistical significance here, if it's like 8/10 solved in the cheating set and 2/10 in the held-back set.
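Back-of-envelope check of that: with 10 puzzles per set, a one-sided Fisher exact test on 8/10 vs 2/10 solved is just a hypergeometric tail sum.

```c
#include <stdio.h>

/* C(n, k) via the product formula; exact enough in doubles for n = 20 */
static double choose(int n, int k) {
    double c = 1;
    for (int i = 1; i <= k; i++)
        c = c * (n - k + i) / i;
    return c;
}

int main(void) {
    double total = choose(20, 10);  /* ways to pick which 10 of 20 get solved */
    double p = 0;
    for (int k = 8; k <= 10; k++)   /* 8, 9, or 10 of the public set solved */
        p += choose(10, k) * choose(10, 10 - k) / total;
    printf("one-sided p = %.4f\n", p);  /* prints ~0.0115, well under 0.05 */
    return 0;
}
```

So yeah, 10 puzzles a set would already clear the usual 0.05 bar.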