I also believe they have employees who they cannot fire because they would spread a hella lot doomspeak if they did, who are True Believers.
Part of me suspects they probably also aren't the sharpest knives in OpenAI's drawer.
I also believe they have employees who they cannot fire because they would spread a hella lot doomspeak if they did, who are True Believers.
Part of me suspects they probably also aren't the sharpest knives in OpenAI's drawer.
ah, jeez, AI bros are trying to make deepfakes even fucking worse:
Deep-Live-Cam is trending #1 on github. It enables anyone to convert a single image into a LIVE stream deepfake, instant and immediately
Most of the replies are openly lambasting this shit like it deserves, thankfully
and really from the demos it looks like a user wouldn’t have to do anything at all besides write “summarize my emails” once. No need to click on anything for confidential info to be exfiltrated if the chatbot can already download arbitrary URLs based on the prompt injection!
We're gonna see a whole lotta data breaches in the upcoming months - calling it right now.
Rare Airbnb W
Local models are theoretically safer, by virtue of not being connected to the company which tried to make Recall a thing, but they're still LLMs at the end of the day - they're still loaded with vulnerabilities, and will remain a data breach waiting to happen unless you make sure its rendered basically useless.
I'm sure such blatant and unrepentant price gouging won't end in any violent altercations from infuriated customers!
(Ah, who am I kidding, somebody's gonna blow their lid over Kroger jacking up water prices on a hot day. They'll be lucky if nobody gets shot before they ditch the idea.)
As a personal sidenote, part of me says the “Self-Aware AI Doomsday” criti-hype might end up coming back to bite OpenAI in the arse if/when one of those DoD tests goes sideways.
Plenty of time and money's been spent building up this idea of spicy autocomplete suddenly turning on humanity and trying to kill us all. If and when one of those spectacular disasters you and Amy predicted does happen, I can easily see it leading to wild stories of ChatGPT going full Terminator or some shit like that.
Also, this is just an impromptu addendum to my extended ramble on the AI bubble crippling tech's image, but I can easily see military involvement in AI building public resentment/stigma against the industry further.
Any military use of AI is already gonna be seen in a warcrimey light thanks to Israel using it in their Gaza Geneva Checklist Speedrun - add in the public being fully aware of your average LLM's, shall we say, tenuous connection to reality, and you have a recipe for people immediately assuming the worst.
Same here. Any kind of jab's a PITA for me, and anything intravenous is some of the worst shit I've ever experienced, but I've gritted my teeth and gotten through them no problem.
Granted, whoever tries to put these into production is probably gonna give it a belt-fed or some shit like that. A gunbot isn't much of a gunbot unless you've got at least a couple hundred rounds ready to go.
Picked up an oddly good sneer from a gen-AI CEO, of all people (thanks to @ai_shame for catching it):