[–] fullsquare@awful.systems 4 points 2 weeks ago* (last edited 2 weeks ago)

maybe there's just enough text written in that psychopathic techbro style, with a similar disregard for normal ethics, that llms latched onto it. this is like what i guess happened with that "explain step by step" trick - instead of drawing from pairs of questions and answers like on quora, the lying box draws from sets of question -> steps -> answer like on chegg or stack or somewhere else where you can expect the answers to be more correct

it'd be more of a case of getting awful output from awful input
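
(a minimal sketch of the prompt-shape difference being described - no real llm client here, the question and both templates are made up purely for illustration; which training-data slices they resemble is just the conjecture above)

```python
# sketch of the "explain step by step" trick (chain-of-thought prompting).
# no llm api is called; the point is only the shape of the two prompts.

QUESTION = "a train leaves at 3pm doing 60 km/h. how far has it gone by 5:30pm?"

# plain question -> answer prompt: resembles quora-style q/a pairs
direct_prompt = f"Q: {QUESTION}\nA:"

# question -> steps -> answer prompt: resembles worked solutions
# (chegg, stack exchange, ...), where answers skew more correct
cot_prompt = f"Q: {QUESTION}\nA: let's think step by step."

if __name__ == "__main__":
    # feed either string to an llm client of your choice; the second
    # tends to elicit intermediate steps before the final answer
    print(direct_prompt)
    print("---")
    print(cot_prompt)
```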

[–] fullsquare@awful.systems 5 points 2 weeks ago* (last edited 2 weeks ago)

nah, what happened is that they were non-psychotic before contact with the chatbot and usually weren't even considered at risk. a chatbot trained on the entire internet will also ingest all the schizo content, the timecubes and dr bronner shampoo labels of the world. it learned to respond in the same style, so when a human starts talking conspiratorial nonsense it'll throw more in, while being a useless sycophant all the way. some people trust these lying idiot boxes; the net result is somebody caught in a seamless infobubble containing only one person and increasing amounts of spiritualist, conspiratorial, or whatever-the-person-prefers content. this sounds awfully like qanon made for an audience of one, and by now it's known that the original was able to maul seemingly normal people pretty badly, except this time they can get there almost by accident, whereas getting hooked on qanon accidentally would be much harder.

[–] fullsquare@awful.systems 3 points 2 weeks ago* (last edited 2 weeks ago)

No. Barrels of API (active pharmaceutical ingredient) are mostly hauled from India or China, then formulated into pills or whatever; this goes especially for generic medicines. There is some American manufacture of APIs, but these tend to be on the more expensive side (biologicals or small-molecule drugs still under patent). Inputs for these APIs also tend to be made in India or China.

[–] fullsquare@awful.systems 7 points 2 weeks ago

*musk voice* if machine god didn't want me to fuck with the racism dial, he wouldn't have made it

[–] fullsquare@awful.systems 6 points 2 weeks ago

i meant more like scamming true believers out of their money, like what happens with crypto - that's currently cfar's whole deal. spam, as something nobody should or wants to spend their creative juices on, or for that matter interact with in any way, seems a natural fit for automation with llms

[–] fullsquare@awful.systems 13 points 2 weeks ago

“A lot of the early Rationalists thought it was important to tolerate weird people, a lot of weird people encountered that tolerance and decided they’d found their new home,” he wrote in a message to me, “and some of those weird people turned out to be genuinely crazy and in a contagious way among the susceptible.”

[–] fullsquare@awful.systems 4 points 2 weeks ago

> The only question is who will get the blame.

what does the chatbot say about that?

[–] fullsquare@awful.systems -1 points 2 weeks ago (1 children)

the japanese have 100v mains and don't have this problem

[–] fullsquare@awful.systems 7 points 2 weeks ago

it's sorta impressive that they're treating their hardware worse than the cryptobros did, then

[–] fullsquare@awful.systems 3 points 2 weeks ago (3 children)

isn't openai's silicon breaking all the time because it's so overheated? so they have to replace it constantly? maybe it's only good for a few months

[–] fullsquare@awful.systems 9 points 2 weeks ago (2 children)

nah, they'll just stop and do nothing. they won't be able to do anything without chatgpt telling them what to do and think

i think the deflation of this bubble will be much slower and a bit anticlimactic. maybe they'll figure out a way to squeeze suckers out of their money in order to keep the charade going

[–] fullsquare@awful.systems 9 points 2 weeks ago* (last edited 2 weeks ago)

the most subtle taliban infiltrator on lesswrong:

e:

> You don't need empirical evidence to reason from first principles

he'll fit in just fine
