gerikson

joined 2 years ago
[–] gerikson@awful.systems 9 points 5 hours ago* (last edited 5 hours ago)

So here's a poster on LessWrong, ostensibly the space to discuss how to prevent people from dying of stuff like disease and starvation, "running the numbers" on a Lancet analysis of the USAID shutdown and, having not been able to replicate its claims of millions of dead thereof, basically concludes it's not so bad?

https://www.lesswrong.com/posts/qgSEbLfZpH2Yvrdzm/i-tried-reproducing-that-lancet-study-about-usaid-cuts-so

No mention of the performative cruelty of the shutdown, the paltry sums involved compared to other gov expenditures, nor the blow it deals to American soft power. But hey, building Patriot missiles and then not sending them to Ukraine is probably net positive for human suffering, just run the numbers the right way!

Edit ah it's the dude who tried to prove that most Catholic cardinals are gay because heredity, I think I highlighted that post previously here. Definitely a high-sneer vein to mine.

[–] gerikson@awful.systems 6 points 1 day ago

No replies and somehow that screen name just screams "troll" to me.

Not that I really care, git can go DIAF as far as I'm concerned.

[–] gerikson@awful.systems 5 points 2 days ago

janitorai - which seems to be a hosting site for creepy AI chats - is blocking all UK visitors due to the OSA

https://blog.janitorai.com/posts/3/

I'm torn here: the OSA seems to me to be massive overreach, but perhaps shielding limeys from AI is worth it

[–] gerikson@awful.systems 8 points 3 days ago* (last edited 3 days ago) (2 children)

Guys, how about we make the coming computer god a fan of Robert Nozick, what could go wrong?

https://www.lesswrong.com/posts/us8ss79mWCgTcSKoK/a-night-watchman-asi-as-a-first-step-toward-a-great-future

[–] gerikson@awful.systems 5 points 3 days ago

Vegemite?

I'll get my coat.

[–] gerikson@awful.systems 17 points 6 days ago* (last edited 5 days ago) (11 children)

Here's an example of normal people using Bayes correctly (rationally assigning probabilities and acting on them) while rats Just Don't Get Why Normies Don't Freak Out:

For quite a while, I've been quite confused why (sweet nonexistent God, whyyyyy) so many people intuitively believe that any risk of a genocide of some ethnicity is unacceptable while being… at best lukewarm against the idea of humanity going extinct.

(Dude then goes on to try to game-theorize this, I didn't bother to poke holes in it)

The thing is, genocides have happened, and people around the world are perfectly happy to advocate for them in diverse situations. Probability-wise, the risk of genocide somewhere is very close to 1, while the risk of "omnicide" is much closer to zero. If you want to advocate for eliminating something, working to eliminate the risk of genocide is much more rational than working to eliminate the risk of everyone dying.

At least one commenter gets it:

Most people distinguish between intentional acts and shit that happens.

(source)

Edit never read the comments (again). The commenter referenced above obviously felt a pithy one-liner didn't adhere to the LW ethos, and instead added an addendum wondering why people are more upset about police brutality killing people than about traffic fatalities. Nice "save", dipshit.

[–] gerikson@awful.systems 13 points 1 week ago

yeah but have you considered how much it's worth that gramma can vibecode a todo app in seconds now???

[–] gerikson@awful.systems 5 points 1 week ago* (last edited 1 week ago)

Haven't really kept up with the pseudo-news of VC-funded companies acquiring each other, but it seems Windsurf (previously courted by OpenAI) is now gonna be purchased by the bros behind Devin.

[–] gerikson@awful.systems 9 points 1 week ago (1 children)

I found out about that too when I visited Reddit and it was automatically translated to Swedish.

[–] gerikson@awful.systems 3 points 1 week ago* (last edited 1 week ago) (1 children)

This isn't an original thought, but a better matrix for comparing the ideology (such as it is) of the current USG is not Nazi Germany but pre-war US right wing obsessions - anti-FDR and anti-New Deal.

This appears in weird ways, like this throwaway comment regarding the Niihau incident, where two ethnic Japanese inhabitants of Niihau helped a downed Japanese airman immediately after Pearl Harbor.

Imagine, if you will, one of the 9/11 hijackers parachuting from the plane before it crashed, asking a random Muslim for help, then having that Muslim be willing to immediately get himself into shootouts, commit arson and kidnappings, and cause misc mayhem.

Then imagine that it was covered in a media environment where the executive branch had been advocating for war for over a decade, and voices which spoke against it were systematically silenced.

(src)

Dude also credits LessOnline with saving his life due to unidentified <<>> shooting up his 'hood when he was there. Charming.

Edit nah he's a neo-Nazi (or at least very concerned about the fate of German PoWs after WW2):

https://www.lesswrong.com/posts/6BBRtduhH3q4kpmAD/against-that-one-rationalist-mashal-about-japanese-fifth?commentId=YMRcfJvcPWbGwRfkJ

[–] gerikson@awful.systems 11 points 1 week ago (5 children)

LW:

Please consider minimizing direct use of AI chatbots (and other text-based AI) in the near-term future, if you can. The reason is very simple: your sanity may be at stake.

Perfect. No notes.

 

Current difficulties

  1. Day 21 - Keypad Conundrum: 01h01m23s
  2. Day 17 - Chronospatial Computer: 44m39s
  3. Day 15 - Warehouse Woes: 30m00s
  4. Day 12 - Garden Groups: 17m42s
  5. Day 20 - Race Condition: 15m58s
  6. Day 14 - Restroom Redoubt: 15m48s
  7. Day 09 - Disk Fragmenter: 14m05s
  8. Day 16 - Reindeer Maze: 13m47s
  9. Day 22 - Monkey Market: 12m15s
  10. Day 13 - Claw Contraption: 11m04s
  11. Day 06 - Guard Gallivant: 08m53s
  12. Day 08 - Resonant Collinearity: 07m12s
  13. Day 11 - Plutonian Pebbles: 06m24s
  14. Day 18 - RAM Run: 05m55s
  15. Day 04 - Ceres Search: 05m41s
  16. Day 23 - LAN Party: 05m07s
  17. Day 02 - Red Nosed Reports: 04m42s
  18. Day 10 - Hoof It: 04m14s
  19. Day 07 - Bridge Repair: 03m47s
  20. Day 05 - Print Queue: 03m43s
  21. Day 03 - Mull It Over: 03m22s
  22. Day 19 - Linen Layout: 03m16s
  23. Day 01 - Historian Hysteria: 02m31s
 

Problem difficulty so far (up to day 16)

  1. Day 15 - Warehouse Woes: 30m00s
  2. Day 12 - Garden Groups: 17m42s
  3. Day 14 - Restroom Redoubt: 15m48s
  4. Day 09 - Disk Fragmenter: 14m05s
  5. Day 16 - Reindeer Maze: 13m47s
  6. Day 13 - Claw Contraption: 11m04s
  7. Day 06 - Guard Gallivant: 08m53s
  8. Day 08 - Resonant Collinearity: 07m12s
  9. Day 11 - Plutonian Pebbles: 06m24s
  10. Day 04 - Ceres Search: 05m41s
  11. Day 02 - Red Nosed Reports: 04m42s
  12. Day 10 - Hoof It: 04m14s
  13. Day 07 - Bridge Repair: 03m47s
  14. Day 05 - Print Queue: 03m43s
  15. Day 03 - Mull It Over: 03m22s
  16. Day 01 - Historian Hysteria: 02m31s
 

The previous thread has fallen off the front page, feel free to use this for discussions on current problems

Rules: no spoilers, use the handy dandy spoiler preset to mark discussions as spoilers

 

This season's showrunners are so lazy, just re-using the same old plots and antagonists.

 

“It is soulless. There is no personality to it. There is no voice. Read a bunch of dialogue in an AI generated story and all the dialogue reads the same. No character personality comes through,” she said. Generated text also tends to lack a strong sense of place, she’s observed; the settings of the stories are either overly-detailed for popular locations, or too vague, because large language models can’t imagine new worlds and can only draw from existing works that have been scraped into its training data.

 

The grifters in question:

Jeremie and Edouard Harris, the CEO and CTO of Gladstone respectively, have been briefing the U.S. government on the risks of AI since 2021. The duo, who are brothers [...]

Edouard's website: https://www.eharr.is/, and on LessWrong: https://www.lesswrong.com/users/edouard-harris

Jeremie's LinkedIn: https://www.linkedin.com/in/jeremieharris/

The company website: https://www.gladstone.ai/

 

HN reacts to a New Yorker piece on the "obscene energy demands of AI" with exactly the same arguments coiners use when confronted with the energy cost of blockchain: the product is valuable in and of itself, demand for more energy will spur investment in energy generation, and what about the energy costs of painting oil on canvas, hmmmmmm??????

Maybe it's just my newness antennae needing calibrating, but I do feel the extreme energy requirements for what's arguably just a frivolous toy are gonna cause AI boosters big problems, especially as energy demands ramp up in the US in the warmer months. Expect the narrative to adjust to counter it.
