blakestacey

joined 2 years ago
[–] blakestacey@awful.systems 26 points 3 months ago (5 children)

Reflection (artificial intelligence) is dreck of a high order. It cites one arXiv post after another, along with marketing materials directly from OpenAI and Google themselves... How do the people who write this shit dress themselves in the morning without pissing into their own socks?

[–] blakestacey@awful.systems 11 points 3 months ago (3 children)

The "trivial" procedure for suggesting that an article be deleted was evidently written by the kids who liked programming their parents' VCR.

[–] blakestacey@awful.systems 24 points 3 months ago (6 children)

Counterpoint: I get to complain about whatever I want.

I could write a lengthy comment about how a website that is nominally editable by "anyone" is in practice a walled garden of acronym-spouting rules lawyers who will crush dissent by a thousand duck nibbles. I could elaborate upon that observation with an analogy to Masto reply guys and FOSS culture at large.

Or I could ban you for fun. I haven't decided yet. I'm kind of giddy from eating a plate of vegan nacho fries and a box of Junior Mints.

[–] blakestacey@awful.systems 11 points 3 months ago (15 children)

Nothing says "these people needed more shoving into lockers" than HPMoR 10th anniversary parties.

[–] blakestacey@awful.systems 19 points 3 months ago* (last edited 3 months ago) (2 children)

"Vibe coding? Back in my day, we called it teledildonics."

[–] blakestacey@awful.systems 17 points 3 months ago

Please acquaint yourself with the definition of the word "latter" on your way to the egress.

[–] blakestacey@awful.systems 8 points 3 months ago (1 children)

> YesNoError is planning to let holders of its cryptocurrency dictate which papers get scrutinized first.

Reminds me of the Decentraland guy who touted the ability to fire news reporters you don't like as a benefit.

[–] blakestacey@awful.systems 10 points 3 months ago (1 children)

Do we have any experts on Wikipedian article-deletion practices around here? Because that looks really thinly sourced.

[–] blakestacey@awful.systems 6 points 3 months ago (3 children)

another day volunteering at the octopus museum. everyone keeps asking me if they can fuck the octopus. buddy,

[–] blakestacey@awful.systems 9 points 3 months ago (1 children)

That's what we call a win-win scenario.

[–] blakestacey@awful.systems 16 points 3 months ago* (last edited 3 months ago)

> It goes without saying that the AI-risk and rationalist communities are not morally responsible for the Zizians any more than any movement is accountable for a deranged fringe.

When the mainstream of the movement is ve zhould chust bomb all datacenters, maaaaaybe they are?

[–] blakestacey@awful.systems 16 points 3 months ago* (last edited 3 months ago) (1 children)

> Yudkowsky was trying to teach people how to think better – by guarding against their cognitive biases, being rigorous in their assumptions and being willing to change their thinking.

No he wasn't.

> In 2010 he started publishing Harry Potter and the Methods of Rationality, a 662,000-word fan fiction that turned the original books on their head. In it, instead of a childhood as a miserable orphan, Harry was raised by an Oxford professor of biochemistry and knows science as well as magic

No, Hariezer Yudotter does not know science. He regurgitates the partial understanding and the outright misconceptions of his creator, who has read books but never had to pass an exam.

> Her personal philosophy also draws heavily on a branch of thought called “decision theory”, which forms the intellectual spine of Miri’s research on AI risk.

This presumes that MIRI's "research on AI risk" actually exists, i.e., that their pitiful output can be called "research" in a meaningful sense.

> “Ziz didn’t do the things she did because of decision theory,” a prominent rationalist told me. She used it “as a prop and a pretext, to justify a bunch of extreme conclusions she was reaching for regardless”.

"Excuse me, Pot? Kettle is on line two."
