I think there's a really important distinction between "getting the same result" when that outcome is guaranteed and when it isn't. Using a brick instead of a hammer to squash something will get you the same result every time. But with an LLM there's no guarantee you'll get any specific outcome (will it hallucinate this time or not?), so even if it gives you what you wanted this time, you have to account for the probability that it wouldn't have, and that you wouldn't have known it let you down.
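To put a rough number on that intuition, here's a minimal Python sketch. The 2% per-call hallucination rate and 50 uses are made-up numbers purely for illustration; the point is just how a small chance of a silent failure compounds over repeated use, while the brick stays at zero.

```python
import random

P_FAIL = 0.02     # assumed per-call silent-failure (hallucination) rate
N_CALLS = 50      # assumed number of times you rely on the tool
TRIALS = 100_000  # Monte Carlo repetitions

# The brick/hammer is deterministic: its failure rate is exactly 0.
# The LLM fails each call independently with probability P_FAIL.
hit = sum(
    any(random.random() < P_FAIL for _ in range(N_CALLS))
    for _ in range(TRIALS)
)
print(f"simulated P(at least one silent failure): {hit / TRIALS:.3f}")
print(f"exact 1 - (1 - p)^n:                      {1 - (1 - P_FAIL) ** N_CALLS:.3f}")  # ~0.64
```

So a tool that's "right 98% of the time" has roughly a 64% chance of silently letting you down at least once over fifty uses, and the whole problem is you don't know which times those were.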
How did this pig escape the pen and still not want to eat slop? :333
I'm glad she's still hitting home runs. I listened to her for years, but when she left Skeptic's Guide I lost track, only catching her when someone else suggests it... mostly because I had to stop consuming so many logically well-thought-out opinions that made me realize how horrible people are..... I still have lots of respect for RW; she's got a lot of balance to her train of thought.
No one knows the exact effects yet, because obviously there aren't any long-term, large-scale studies. As much as I dislike AI, some of these claims are very misleading, even though they might turn out to be true. This is all speculation; we simply don't know yet.
But I'd recommend being careful so you don't get buried in the avalanche later.
I 100% agree. I just don't like being misleading, even if the statement lines up with my own views.
I hear you; there's nothing more to say. :3