TinyTimmyTokyo

joined 2 years ago
[–] TinyTimmyTokyo@awful.systems 21 points 1 day ago (5 children)

The Lasker/Mamdani/NYT sham of a story just gets worse and worse. It turns out that the ultimate source of Cremieux's (Jordan Lasker's) hacked Columbia University data is a hardcore racist hacker who uses a slur for their name on X. The NYT reporter who wrote the Mamdani piece, Benjamin Ryan, turns out to have been a follower of this hacker's X account. Ryan essentially used Lasker as a cutout for the blatantly racist hacker.

https://archive.is/d9rh1

[–] TinyTimmyTokyo@awful.systems 8 points 2 days ago (4 children)

It's starting to feel like I need to download a snapshot of Wikipedia now before it gets worse.

[–] TinyTimmyTokyo@awful.systems 9 points 2 days ago (2 children)

I'd be lying if I said the randomly generated narrative the LLM is stringing together isn't hilarious.

"I panicked and ran database commands without permission."

"I destroyed all production data."

"You immediately said 'No', ''Stop', 'You didn't even ask.'"

"But it was already too late."

[–] TinyTimmyTokyo@awful.systems 17 points 3 days ago (5 children)

The rest of that guy's blog is a fucking neofascist mess. That'll teach me to post a link without first checking out the writer.

[–] TinyTimmyTokyo@awful.systems 9 points 3 days ago (7 children)

Reading the comments led me to this entertaining sneer about our friends.

[–] TinyTimmyTokyo@awful.systems 11 points 4 days ago (10 children)

Daniel Koko (Kokotajlo) is trying to figure out how to stop the AGI apocalypse.

How might this work? Install TTRPG aficionados at the chip fabs and tell them to roll a saving throw.

Similarly, at the chip production facilities, a committee of representatives stands at the end of the production line basically and rolls a ten-sided die for each chip; chips that don't roll a 1 are destroyed on the spot.

And if that doesn't work? Koko ultimately ends up pretty much where Big Yud did: bombing the fuck out of the fabs and the data centers.

"For example, if a country turns out to have a hidden datacenter somewhere, the datacenter gets hit by ballistic missiles and the country gets heavy sanctions and demands to allow inspectors to pore over other suspicious locations, which if refused will lead to more missile strikes."

[–] TinyTimmyTokyo@awful.systems 16 points 4 days ago (1 children)

It's not that weird when you understand the sharks he swims with. Race pseudoscientists routinely peddle the idea that Ashkenazi Jews have higher IQs than any other ethnic or racial group. Scoot Alexander and Big Yud have made this claim numerous times. Lasker pretending to be a Jew makes more sense once you realize this.

[–] TinyTimmyTokyo@awful.systems 12 points 5 days ago (5 children)

You thought Crémieux (Jordan Lasker) was bad. You were wrong. He's even worse. https://www.motherjones.com/politics/2025/07/cremieux-jordan-lasker-mamdani-nyt-nazi-faliceer-reddit/

[–] TinyTimmyTokyo@awful.systems 11 points 6 days ago (10 children)

Oof, that Hollywood guest (Brian Koppelman) is a dunderhead. "These AI layoffs actually make sense because of complexity theory". "You gotta take Eliezer Yudkowsky seriously. He predicted everything perfectly."

I looked up his background, and it turns out he's the guy behind the TV show "Billions". That immediately made him make sense to me. The show attempts to lionize billionaires and is ultimately undermined not just by its offensive premise but by the world's most block-headed and cringe-inducing dialog.

Terrible choice of guest, Ed.

[–] TinyTimmyTokyo@awful.systems 15 points 1 week ago* (last edited 1 week ago) (6 children)

Sex pest billionaire Travis Kalanick says AI is great for more than just vibe coding. It's also great for vibe physics.

[–] TinyTimmyTokyo@awful.systems 12 points 1 week ago* (last edited 1 week ago)

When you look at METR's web site and review the credentials of its staff, you find that almost none of them has any sort of academic research background. No doctorates as far as I can tell, and lots of rationalist junk affiliations.

[–] TinyTimmyTokyo@awful.systems 13 points 1 week ago (3 children)

I like his new framing of the accelerationists and transhumanists as pro-extinctionists.

 

"Ban women from universities, higher education and most white-collar jobs."

"Allow people to privately borrow against the taxable part of the future incomes or other economic activities of their children."

So many execrable takes in one tweet, and that's only two of them. I'm tempted to think he's cynically outrage-farming, but then I remember who he is.

 

Nate Soares and Big Yud have a book coming out. It's called "If Anyone Builds It, Everyone Dies". From the names of the authors and the title of the book, you already know everything you need to know about its contents without having to read it. (In fact, given the signature prolixity of the rationalists, you can be sure that it says in 50,000 words what could just as easily have been said in 20.)

In this LessWrong post, Nate identifies the real reason the rationalists have been unsuccessful at convincing people in power to take the idea of existential risk seriously. The rationalists simply don't speak with enough conviction. They hide the strength of their beliefs. They aren't bold enough.

As if rationalists have ever been shy about stating their kooky beliefs.

But more importantly, buy his book. Buy so many copies of the book that it shows up on all the best-seller lists. Buy so many copies that he gets invited to speak on fancy talk shows that will sell even more books. Basically, make him famous. Make him rich. Make him a household name. Only then can we make sure that the AI god doesn't kill us all.

Nice racket.

 

The tech bro hive mind on HN is furiously flagging (i.e., voting into invisibility) any submissions dealing with Tesla, Elon Musk or the Kafkaesque US immigration detention situation. Add "/active" to the URL to see.

The site's moderator says it's fine because users are "tired of the repetition". Repetition of what exactly? Attempts to get through the censorship wall?

 

Sneerclubbers may recall a recent encounter with "Tracing Woodgrains", né Jack Despain Zhou, the rationalist-infatuated former producer and researcher for "Blocked and Reported", a podcast featuring prominent transphobes Jesse Singal and Katie Herzog.

It turns out he's started a new venture: a "think-tank" called the "Center for Educational Progress." What's this think-tank's focus? Introducing eugenics into educational policy. Of course they don't put it in those exact words, but that's the goal. The co-founder of the venture is Lillian Tara, former executive director of Pronatalist.org, the outfit run by creepy Harry Potter look-alikes (and a moderately frequent topic in this forum) Simone and Malcolm Collins. According to the anti-racist activist group Hope Not Hate:

The Collinses enlisted Lillian Tara, a pronatalist graduate student at Harvard University. During a call with our undercover reporter, Tara referred three times to her work with the Collinses as eugenics. “I don’t care if you call me a eugenicist,” she said.

Naturally, the CEP is concerned about IQ and wants to ensure that mentally superior (read: white) individuals don't have their hereditarily-deserved resources unfairly allocated to the poors and the stupids. They have a reading list on their Substack, which includes people like Arthur Jensen and LessWrong IQ-fetishist Gwern.

So why are Trace and Lillian doing this now? I suppose they're striking while the iron is hot, probably hoping to get some sweet sweet Thiel-bucks as Elon and his goon-squad do their very best to gut public education.

And more proof for the aphorism: "Scratch a rationalist, find a racist".

 

In a recent Hard Fork (Hard Hork?) episode, Casey Newton and Kevin Roose described attending "The Curve", a recent conference in Berkeley organized and attended mostly by our very best friends. When asked about the most memorable session he attended, Casey said:

That would have been a session called If Anyone Builds It, Everyone Dies, which was hosted by Eliezer Yudkowsky. Eliezer is sort of the original doomer. For a couple of decades now, he has been warning about the prospects of super intelligent AI.

His view is that there is almost no scenario in which we could build a super intelligence that wouldn't either enslave us or hurt us, kill all of us, right? So he's been telling people from the beginning, we should probably just not build this. And so you and I had a chance to sit in with him.

People fired a bunch of questions at him. And we should say, he's a really polarizing figure, and I think is sort of on one extreme of this debate. But I think he was also really early to understanding a lot of harms that have bit by bit started to materialize.

And so it was fascinating to spend an hour or so sitting in a room and hearing him make his case.

[...]

Yeah, my case for taking these folks seriously, Kevin, is that this is a community that, over a decade ago, started to make a lot of predictions that just basically came true, right? They started to look at advancements in machine learning and neural networks and started to connect the dots. And they said, hey, before too long, we're going to get into a world where these models are incredibly powerful.

And all that stuff just turned out to be true. So, that's why they have credibility with me, right? Everything they believe, you know, we could hit some sort of limit that they didn't see coming.

Their model of the world could sort of fall apart. But as they have updated it bit by bit, and as these companies have made further advancements and they've built new products, I would say that this model of the world has basically held so far. And so, if nothing else, I think we have to keep this group of folks in mind as we think about, well, what is the next phase of AI going to look like for all of us?

 

Excerpt:

A new study published on Thursday in The American Journal of Psychiatry suggests that dosage may play a role. It found that among people who took high doses of prescription amphetamines such as Vyvanse and Adderall, there was a fivefold increased risk of developing psychosis or mania for the first time compared with those who weren’t taking stimulants.

Perhaps this explains some of what goes on at LessWrong and in other rationalist circles.

 

Maybe she was there to give Moldbug some relationship advice.

 

The New Yorker has a piece on the Bay Area AI doomer and e/acc scenes.

Excerpts:

[Katja] Grace used to work for Eliezer Yudkowsky, a bearded guy with a fedora, a petulant demeanor, and a p(doom) of ninety-nine per cent. Raised in Chicago as an Orthodox Jew, he dropped out of school after eighth grade, taught himself calculus and atheism, started blogging, and, in the early two-thousands, made his way to the Bay Area. His best-known works include “Harry Potter and the Methods of Rationality,” a piece of fan fiction running to more than six hundred thousand words, and “The Sequences,” a gargantuan series of essays about how to sharpen one’s thinking.

[...]

A guest brought up Scott Alexander, one of the scene’s microcelebrities, who is often invoked mononymically. “I assume you read Scott’s post yesterday?” the guest asked [Katja] Grace, referring to an essay about “major AI safety advances,” among other things. “He was truly in top form.”

Grace looked sheepish. “Scott and I are dating,” she said—intermittently, nonexclusively—“but that doesn’t mean I always remember to read his stuff.”

[...]

“The same people cycle between selling AGI utopia and doom,” Timnit Gebru, a former Google computer scientist and now a critic of the industry, told me. “They are all endowed and funded by the tech billionaires who build all the systems we’re supposed to be worried about making us extinct.”

 

In her sentencing submission to the judge in the FTX trial, Barbara Fried argues that her son is just a misunderstood altruist, who doesn't deserve to go to prison for very long.

Excerpt:

One day, when he was about twelve, he popped out of his room to ask me a question about an argument made by Derek Parfit, a well-known moral philosopher. As it happens, I am quite familiar with the academic literature Parfit's article is a part of, having written extensively on related questions myself. His question revealed a depth of understanding and critical thinking that is not all that common even among people who think about these issues for a living. "What on earth are you reading?" I asked. The answer, it turned out, was that he was working his way through the vast literature on utilitarianism, a strain of moral philosophy that argues that each of us has a strong ethical obligation to live so as to alleviate the suffering of those less fortunate than ourselves. The premises of utilitarianism obviously resonated strongly with what Sam had already come to believe on his own, but gave him a more systematic way to think about the problem and connected him to an online community of like-minded people deeply engaged in the same intellectual and moral journey.

Yeah, that "online community" we all know and love.

 

Pass the popcorn, please.

(nitter link)

 

They've been pumping this bio-hacking startup on the Orange Site (TM) for the past few months. Now they've got Siskind shilling for them.

 

Molly White is best known for shining a light on the silliness and fraud that are cryptocurrency, blockchain and Web3. This essay may be a sign that she's shifting her focus to our sneerworthy friends in the extended rationalism universe. If so, that's an excellent development. Molly's great.
