You can film with an actual camera, then use video-to-video to make it look very AI. If you're just grifting, that would be the way to go, I think.
They're also very gleeful about finally having one-upped the experts with one weird trick.
Up until AI, they were the people who were inept and late at adopting new technology; now they get to feel they're ahead (because this time the half-assed new technology was pushed onto them, and they didn't figure out they needed to opt out).
I was writing some math code, and, not being an idiot, I used an open-source math library for something called "QR decomposition". It's efficient, it supports sparse matrices (matrices where most entries are 0), etc.
Just out of curiosity, I checked where some idiot vibecoder would end up. The AI simply plagiarizes from some shit sample snippets that exist purely to teach people what QR decomposition is. The result is actually unusable, because it's numerically unstable.
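To make the instability concrete, here's a minimal sketch (my example, not the actual snippet the AI spat out; the test matrix and the orthogonality check are my choices). It pits textbook classical Gram-Schmidt, the kind of algorithm those teaching snippets contain, against SciPy's library QR on a nearly rank-deficient matrix:

```python
# Sketch: textbook Gram-Schmidt QR vs. a real library routine.
# Assumes numpy/scipy; the ill-conditioned test matrix is an illustrative choice.
import numpy as np
from scipy.linalg import qr

def classical_gram_schmidt(A):
    """QR via classical Gram-Schmidt: the algorithm teaching snippets
    show, which loses orthogonality on ill-conditioned input."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # project against earlier columns
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

# Nearly rank-deficient matrix: the three columns are almost identical.
eps = 1e-8
A = np.array([[1.0, 1.0, 1.0],
              [eps, 0.0, 0.0],
              [0.0, eps, 0.0],
              [0.0, 0.0, eps]])

Q_cgs, _ = classical_gram_schmidt(A)
Q_lib, _ = qr(A, mode='economic')  # Householder-based, numerically stable

# Orthogonality check: ||Q^T Q - I|| should be near machine epsilon.
print(np.linalg.norm(Q_cgs.T @ Q_cgs - np.eye(3)))  # ~0.7: catastrophic
print(np.linalg.norm(Q_lib.T @ Q_lib - np.eye(3)))  # ~1e-16: fine
```

On this input, two of the Gram-Schmidt "orthonormal" columns come out 60 degrees apart instead of perpendicular. A production library avoids this with Householder reflections and, separately, has sparse-aware code paths that no tutorial snippet even attempts.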
Who in the fuck even needs this shit to be plagiarized, anyway?
It can't plagiarize a production-quality implementation, because you can count those on the fingers of one hand; they're complex as fuck, and you can't just blend a few of them together to pretend you didn't plagiarize.
The answer is: the people peddling the AI. They are the ones who ordered plagiarism with extra plagiarism on top. These are not coding tools; they are demos to convince investors to buy the actual product, which is the company's stock. There's a little bit of tool functionality (you can ask them to refactor code), but that's just you misusing a demo to try to get some value out of it.
And to that end, the demos take every opportunity to plagiarize something, and to talk about how the "AI" wrote the code from scratch based on its supposed understanding of fairly advanced math.
And in coding, plagiarism is counterproductive. Many open source libraries can be used in commercial projects. You get upstream fixes for free. You don't end up with bugs, or worse yet security exploits, that may have been fixed since the training cut-off date.
No one in their fucking right mind would willingly want their product to contain copy-pasted snippets from stale open source libraries, passed through some sort of variable-renaming copyright-laundering machine.
Except, of course, the business idiots in charge of software at major companies, who don't understand software. Who just failed upwards.
They look at plagiarized lines and count them as improved productivity.
Indistinguishable from a business idiot.
It's also interesting that this is the most conservative, pro-"it's not just memorizing" estimate possible: they multiplied the probabilities of successive tokens. Basically, it means that if it starts shitting out a quote, it won't be able to stop quoting until their anti-copy-the-whole-book fine-tuning kicks in after 50 words or so.
It can probably output far more under a realistic test (always picking the top token, i.e. temperature = 0).
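To spell the difference out, a toy sketch (with a stand-in "model" function I made up; no real LLM API here): multiplying per-token probabilities makes even a quote whose every token is the model's top pick look unlikely, while greedy decoding at temperature 0 reproduces it verbatim.

```python
# Toy sketch: product-of-probabilities estimate vs. greedy (temperature-0) decoding.
# `stub_probs` stands in for one LLM forward pass and is entirely made up.
import math

def sequence_prob(next_token_probs, quote_tokens):
    """Conservative estimate: multiply the model's probability of each
    successive quote token. One mediocre token tanks the whole product."""
    logp, prefix = 0.0, []
    for tok in quote_tokens:
        logp += math.log(next_token_probs(prefix).get(tok, 1e-12))
        prefix.append(tok)
    return math.exp(logp)

def greedy_match_len(next_token_probs, quote_tokens):
    """Realistic test: always pick the top token. The quote comes out
    verbatim as long as each of its tokens is merely the argmax."""
    prefix, matched = [], 0
    for tok in quote_tokens:
        probs = next_token_probs(prefix)
        if max(probs, key=probs.get) != tok:
            break
        matched += 1
        prefix.append(tok)
    return matched

QUOTE = ["it", "was", "the", "best", "of", "times"]

def stub_probs(prefix):
    # Made-up model: 60% on the next quote token, 40% on everything else.
    i = len(prefix)
    nxt = QUOTE[i] if i < len(QUOTE) else "<eos>"
    return {nxt: 0.6, "<other>": 0.4}

print(sequence_prob(stub_probs, QUOTE))      # 0.6**6 ≈ 0.047 -- "unlikely"
print(greedy_match_len(stub_probs, QUOTE))   # 6 -- the full quote comes out
```

So the product-based metric reports under 5% while greedy decoding emits the quote in full; stretch that to a 50-token passage at 60% per token and the reported probability drops to roughly 10^-11, even though temperature-0 sampling would still reproduce every word.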
If it were a basement dweller with a chatbot that could be mistaken for a criminal co-conspirator, he would've gotten arrested and his computer seized as evidence, and then it would be a crapshoot whether he could even convince a jury that it was an accident. Especially if he was getting paid for his chatbot. Now, I'm not saying that this is right, just stating how it is for normal human beings.
It may not be explicitly illegal for a computer to do something, but you are liable for what your shit does. You can't just make a robot lawnmower and run over a neighbor's kid. If you are using random numbers to steer your lawnmower... yeah.
But because it's OpenAI, with its 300 billion dollar "valuation", absolutely nothing can happen whatsoever.
In theory, at least, the purpose of criminal justice is the prevention of crimes. And if arresting a person would serve that purpose, then court-ordering the shutdown of a chatbot would serve that same purpose.
There's no 1st amendment right to enter into criminal conspiracies to kill people. Not even if "people" is Sam Altman.
It's curious how, if ChatGPT were a person saying exactly the same words, he would've been charged with criminal conspiracy, or even shot, as its human co-conspirator in Florida was.
And had it been a foreign human in the Middle East radicalizing random people, he would've gotten a drone strike.
"AI" - and the companies building them - enjoy the kind of universal legal immunity that is never granted to humans. That needs to end.
I appreciate the sentiment, but I also hate the whole "AI is a power loom for coding" framing.
The power loom for coding is called "git clone".
What "AI" (LLM) tools provide is just English as a programming language with plagiarized sum total of all open source as the standard library. English is a shit programming language. LLMs are shit at compiling it. Open source is awesome. Plagiarized open source is "meh" - you can not apply upstream patches.
So, the judge says:
"In cases involving uses like Meta’s, it seems like the plaintiffs will often win, at least where those cases have better-developed records on the market effects of the defendant’s use."
And what is that supposed to even look like? Do authors need a better-developed record of the effects of movies on book sales to get paid for movie adaptations, too?
It's called sarcasm.
Yeah, I'm thinking that people who believe their brains work like LLMs may be somewhat correct. Still wrong in some ways, since even their brains learn from several orders of magnitude less data than LLMs do, but close enough.