blakestacey

joined 2 years ago
[–] blakestacey@awful.systems 4 points 4 hours ago (1 children)

From Yud's remarks on Xitter:

As much as people might like to joke about how little skill it takes to found a $2B investment fund, it isn't actually true that you can just saunter in as a psychotic IQ 80 person and do that.

Well, not with that attitude.

You must be skilled at persuasion, at wearing masks, at fitting in, at knowing what is expected of you;

If "wearing masks" really is a skill they need, then they are all susceptible to going insane and hiding it from their coworkers. Really makes you think (TM).

you must outperform other people also trying to do that, who'd like that $2B for themselves. Winning that competition requires g-factor and conscientious effort over a period.

zoom and enhance

g-factor

[–] blakestacey@awful.systems 6 points 4 hours ago (3 children)

Yud continues to bluecheck:

"This is not good news about which sort of humans ChatGPT can eat," mused Yudkowsky. "Yes yes, I'm sure the guy was atypically susceptible for a $2 billion fund manager," he continued. "It is nonetheless a small iota of bad news about how good ChatGPT is at producing ChatGPT psychosis; it contradicts the narrative where this only happens to people sufficiently low-status that AI companies should be allowed to break them."

Is this "narrative" in the room with us right now?

It's reassuring to know that times change, but Yud will always be impressed by the virtues of the rich.

[–] blakestacey@awful.systems 10 points 1 day ago (1 children)

to placate trans ideology

[–] blakestacey@awful.systems 8 points 2 days ago (2 children)

Here's their page of instructions, written as usual by the children who really liked programming the family VCR:

https://en.wikipedia.org/wiki/Wikipedia:Database_download
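
(If you just want the thing itself rather than the instructions: the dumps live on dumps.wikimedia.org. A minimal Python sketch, assuming the standard "latest" filename documented on that page, and fair warning that the file is tens of gigabytes compressed:)

```python
# Minimal sketch: stream the latest English Wikipedia article dump to disk.
# The URL follows the layout documented on the page linked above.
import requests

DUMP_URL = (
    "https://dumps.wikimedia.org/enwiki/latest/"
    "enwiki-latest-pages-articles.xml.bz2"
)

with requests.get(DUMP_URL, stream=True, timeout=30) as resp:
    resp.raise_for_status()
    with open("enwiki-latest-pages-articles.xml.bz2", "wb") as out:
        for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
            out.write(chunk)
```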

[–] blakestacey@awful.systems 12 points 2 days ago* (last edited 2 days ago)

Let's see who he reads. Vox Day (who is now using ChatGPT to "disprove" evolution), Christopher Rufo, Curtis Yarvin, Emil Kirkegaard, Mars Review model Bimbo Ubermensch.... It's a real Who's Who of Why The Fuck Do I Know Who These People Are?!

[–] blakestacey@awful.systems 14 points 2 days ago (5 children)

Want to feel depressed? Over 2,000 Wikipedia articles, on topics from Morocco to Natalie Portman to Sinn Féin, carry citations pasted straight out of ChatGPT: the giveaway is the utm_source=chatgpt.com tracker left in the reference URLs. And that's just the obvious ones.

https://en.wikipedia.org/w/index.php?search=insource%3A%22utm_source%3Dchatgpt.com%22&title=Special%3ASearch&profile=advanced&fulltext=1&ns0=1
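
(For anyone who'd rather tally the damage than eyeball it: the same insource query can go through the standard MediaWiki search API. A rough sketch; the User-Agent string is just a placeholder, per Wikimedia's API etiquette:)

```python
# Rough sketch: count English Wikipedia articles whose wikitext contains the
# ChatGPT referral tracker, via the standard MediaWiki search API.
import requests

API = "https://en.wikipedia.org/w/api.php"
params = {
    "action": "query",
    "list": "search",
    "srsearch": 'insource:"utm_source=chatgpt.com"',
    "srnamespace": 0,   # mainspace articles only
    "srlimit": 10,
    "format": "json",
}

resp = requests.get(API, params=params, timeout=30,
                    headers={"User-Agent": "stubsack-audit/0.1 (placeholder)"})
resp.raise_for_status()
data = resp.json()
print("total hits:", data["query"]["searchinfo"]["totalhits"])
for hit in data["query"]["search"]:
    print("-", hit["title"])
```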

[–] blakestacey@awful.systems 14 points 3 days ago (6 children)

https://xcancel.com/jasonlk/status/1946069562723897802

Vibe Coding Day 8,

I'm not even out of bed yet and I'm already planning my day on @Replit.

Today is AI Day, to really add AI to our algo.

[...]

If @Replit deleted my database between my last session and now there will be hell to pay

[–] blakestacey@awful.systems 10 points 4 days ago (6 children)

Seems overly generous both to Christopher Hitchens and to Julia Galef.

[–] blakestacey@awful.systems 17 points 4 days ago

(putting on an N95 before I enter the grocery store) dun dun DUN DUN dun dun DUN DUN deedle dee deedle dee DUN DUN

[–] blakestacey@awful.systems 16 points 4 days ago

The more expertise you have, the more you can use ChatGPT as an idea collaborator, and use your own discernment on the validity of the ideas.

Good grief. Just take drugs, people.

[–] blakestacey@awful.systems 31 points 4 days ago (1 children)

Don't worry; this post is not going to be cynical or demeaning to you or your AI companion.

If you're worried that your "AI companion" can be demeaned by pointing out the basic truth about it, then you deserve to be demeaned yourself.

 

Mother Jones has a new report about Jordan Lasker:

A Reddit account named Faliceer, which posted highly specific biographical details that overlapped with Lasker’s offline life and which a childhood friend of Lasker’s believes he was behind, wrote in 2016, “I actually am a Jewish White Supremacist Nazi.” The Reddit comment, which has not been previously reported, is one of thousands of now-deleted posts from the Faliceer account obtained by Mother Jones in February. In other posts written between 2014 and 2016, Faliceer endorses Nazism, eugenics, and racism. He wishes happy birthday to Adolf Hitler, says that “I support eugenics,” and uses a racial slur when saying those who are attracted to Black people should kill themselves.

 

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

 

"TheFutureIsDesigned" bluechecks thusly:

You: takes 2 hours to read 1 book

Me: take 2 minutes to think of precisely the information I need, write a well-structured query, tell my agent AI to distribute it to the 17 models I've selected to help me with research, who then traverse approximately 1 million books, extract 17 different versions of the information I'm looking for, which my overseer agent then reviews, eliminates duplicate points, highlights purely conflicting ones for my review, and creates a 3-level summary.

And then I drink coffee for 58 minutes.

We are not the same.

For bonus points:

I want to live in the world of Hyperion, Ringworld, Foundation, and Dune.

You know, Dune.

(Via)

 

Everybody loves Wikipedia, the surprisingly serious encyclopedia and the last gasp of Old Internet idealism!

(90 seconds later)

We regret to inform you that people write credulous shit about "AI" on Wikipedia as if that is morally OK.

Both of these are somewhat less bad than they were when I first noticed them, but they're still pretty bad. I am puzzled as to how the latter even exists. I had thought there were rules against devoting a whole page to a neologism, but either I'm wrong about that or the "rules" aren't enforced very strongly.

 

From MIT Technology Review:

In the week since a Chinese AI model called DeepSeek became a household name, a dizzying number of narratives have gained steam, with varying degrees of accuracy [...] perhaps most notably, that DeepSeek’s new, more efficient approach means AI might not need to guzzle the massive amounts of energy that it currently does.

The latter notion is misleading, and new numbers shared with MIT Technology Review help show why. These early figures—based on the performance of one of DeepSeek’s smaller models on a small number of prompts—suggest it could be more energy intensive when generating responses than the equivalent-size model from Meta. The issue might be that the energy it saves in training is offset by its more intensive techniques for answering questions, and by the long answers they produce.

Add the fact that other tech firms, inspired by DeepSeek’s approach, may now start building their own similar low-cost reasoning models, and the outlook for energy consumption is already looking a lot less rosy.
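
(The arithmetic behind that offset is simple enough to sketch. Every number below is a made-up placeholder, not a measurement; the point is only the shape of the tradeoff:)

```python
# Back-of-envelope sketch of the offset described above. All figures are
# hypothetical placeholders: a one-time training saving divided by a
# per-response inference penalty gives the query count at which the
# saving has been eaten.
training_saving_j = 1e13   # hypothetical one-time energy saved in training (joules)
extra_j_per_token = 0.5    # hypothetical extra inference energy per token (joules)
extra_tokens = 400         # hypothetical extra tokens per long "reasoning" answer

extra_j_per_response = extra_j_per_token * extra_tokens
breakeven = training_saving_j / extra_j_per_response
print(f"training saving exhausted after ~{breakeven:,.0f} responses")
```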

 

Kate Knibbs reports in Wired magazine:

Against the company’s wishes, a court unredacted information alleging that Meta used Library Genesis (LibGen), a notorious so-called shadow library of pirated books that originated in Russia, to help train its generative AI language models. [...] In his order, Chhabria referenced an internal quote from a Meta employee, included in the documents, in which they speculated, “If there is media coverage suggesting we have used a dataset we know to be pirated, such as LibGen, this may undermine our negotiating position with regulators on these issues.” [...] These newly unredacted documents reveal exchanges between Meta employees unearthed in the discovery process, like a Meta engineer telling a colleague that they hesitated to access LibGen data because “torrenting from a [Meta-owned] corporate laptop doesn’t feel right 😃”. They also allege that internal discussions about using LibGen data were escalated to Meta CEO Mark Zuckerberg (referred to as "MZ" in the memo handed over during discovery) and that Meta's AI team was "approved to use" the pirated material.

 

Retraction Watch reports:

All but one member of the editorial board of the Journal of Human Evolution (JHE), an Elsevier title, have resigned, saying the “sustained actions of Elsevier are fundamentally incompatible with the ethos of the journal and preclude maintaining the quality and integrity fundamental to JHE’s success.”

The resignation statement reads in part,

In fall of 2023, for example, without consulting or informing the editors, Elsevier initiated the use of AI during production, creating article proofs devoid of capitalization of all proper nouns (e.g., formally recognized epochs, site names, countries, cities, genera, etc.) as well as italics for genera and species. These AI changes reversed the accepted versions of papers that had already been properly formatted by the handling editors.

(Via Pharyngula.)

