this post was submitted on 19 Jun 2025
111 points (100.0% liked)
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
you are viewing a single comment's thread
First of all, sorry if my comment sounded like I was dismissing the position of the people who commented before me. I tried exploring the other side of the argument and am genuinely open to any outcome here.
Hm yes, maybe not malicious, but the quoted portion from the iNat forum sounded very much like the commenter was describing the iNat data as untrustworthy.
I'm probably not knowledgeable enough to really have an opinion. I'd have thought there are some use cases where generative AI can be helpful. But you have a point: iNat actually relies on correct and trustworthy results.
yeah, I get that, and I'm not in favor of it either. But it's probably also a cost-benefit calculation for iNat: take a grant from Google and have to work on some sort of generative AI in return.
Sorry, I'm out of the loop. What are you referring to?
OK, fair, maybe that was a bit much, sorry. I still think it's a huge step to delete your account and leave a community just over the mention of generative AI, and I have a hard time getting into that head space. Like, sure, if you'd invested little time in the site. But I've put thousands of hours into iNat and would certainly need a strong incentive to delete my account...
no worries -- i am in the unfortunate position of very often needing to assume the worst in others and maybe my reading of you was harsher than it should have been, and for that i am sorry. but...
"generative AI" is a bit of a marketing buzzword. the specific technology in play here is LLMs, and they should be forcefully kept out of every online system, especially ones people rely on for information.
LLMs are inherently unfit for every purpose. they might be "useful", in the sense that a rock is useful for driving a nail through a board, but they are not tools in the same way hammers are. the only exception is when you need a lot of text in a hurry and don't care about the quality or accuracy of the text -- in other words, spam and scams. in those specific domains i can admit LLMs are the most applicable tool for the job.
so when ostensibly-smart people, especially ones running public information systems, propose using LLMs for things LLMs cannot do, such as explaining species identification procedures, it means either 1) they've been suckered into believing LLMs are capable of those things, or 2) they're being paid to propose them. sometimes it's a mix of both. either way, it very much indicates those people should not be trusted.
furthermore, the technology industry as a whole has already spent several billion dollars trying to push this technology onto and into every part of our daily lives. LLM-infested slop has made its way onto every online platform, more often than not with direct backing from those platforms. and the technology industry is openly hostile to the idea of "consent", actively trying to undermine it at every turn. that hostility even made it into the statement attempting to reassure users in that forum post about the mystery demo LLMs -- note the phrase "making it opt-out". why not "opt-in"? why not "with consent"?
it's no wonder that people are leaving -- the writing is more or less on the wall.