fullsquare

joined 3 months ago
[–] fullsquare@awful.systems 1 points 1 week ago

about #1, not only does this make the number of potential leakers higher (intentional or not, via opsec failures), it also narrows down the number of loyal, reliable people who also won't fuck up the job real fast

[–] fullsquare@awful.systems 5 points 1 week ago* (last edited 1 week ago)

aliexpress has done that since forever, but there you can just set the display language once and you're done. these ai-dubs are probably the worst version so far, but they can be turned off by the uploader (it's opt-out) (for now)

[–] fullsquare@awful.systems 14 points 1 week ago (1 children)

but that's not disruptive, and it works, and it makes altman zero money

[–] fullsquare@awful.systems 9 points 1 week ago (3 children)

I think it's implied by that bozo that the bowling place also runs a chatbot of its own

[–] fullsquare@awful.systems 3 points 1 week ago

why doesn't anthropic, the bigger startup, simply eat anysphere?

[–] fullsquare@awful.systems 13 points 1 week ago

the ml in lemmy.ml stands for marxism-leninism

[–] fullsquare@awful.systems 3 points 1 week ago

whyyyyy it's a real site

[–] fullsquare@awful.systems 25 points 1 week ago

there shouldn't be billion dollar startups

[–] fullsquare@awful.systems 2 points 1 week ago (1 children)

if someone is so bad at a subject that chatgpt offers actual help, then maybe that person shouldn't be writing an article on that subject in the first place. the only language chatgpt speaks is bland nonconfrontational corporate sludge, so i'm not sure how that helps

[–] fullsquare@awful.systems 4 points 1 week ago (3 children)

in one of these preprints there were traces of the prompt used for writing the paper itself, too

[–] fullsquare@awful.systems 7 points 2 weeks ago (5 children)

maybe it's to get through llm pre-screening and allow the paper to be seen by human eyeballs

[–] fullsquare@awful.systems 4 points 2 weeks ago* (last edited 2 weeks ago)

maybe there's just enough text written in that psychopathic techbro style, with a similar disregard for normal ethics, that llms latched onto it. this is like what i guess happened with that "explain step by step" trick - instead of picking up the pattern from pairs of questions and answers like on quora, the lying box picks it up from sets of question -> steps -> answer like on chegg or stack or somewhere else where you can expect the answers to be more correct

it'd be more a case of getting awful output from awful input
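
rough sketch of what that "explain step by step" trick looks like on the prompt side (just an illustration - the question text and names here are made up for the example, nothing from the thread or any particular model):

```python
# zero-shot "explain step by step" trick, sketched as plain prompt strings.
# the question is a made-up example; no particular model or api is assumed.

question = (
    "a bat and a ball cost $1.10 in total. the bat costs $1.00 more "
    "than the ball. how much does the ball cost?"
)

# plain prompt: mirrors bare question -> answer pairs (quora-style data)
plain_prompt = f"Q: {question}\nA:"

# step-by-step prompt: nudges the model toward question -> worked steps -> answer
# (chegg / stack exchange-style data, where answers tend to show their work)
cot_prompt = f"Q: {question}\nA: let's think step by step."

print(plain_prompt)
print()
print(cot_prompt)
```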
