this post was submitted on 19 Jun 2025
92 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

top 11 comments
[–] otter@lemmy.ca 16 points 8 hours ago* (last edited 8 hours ago) (2 children)

I went through some of the links from the article, and there was an update pinned in one of them:

https://forum.inaturalist.org/t/what-is-this-inaturalist-and-generative-ai/66140/431

@procyonloiter and I just had a 3-hour in-person talk with @loarie and I am delighted to say that it has completely alleviated my concerns around this entire issue. I went into it seriously contemplating deleting my entire account and many years of work, and I have come out of it feeling like a massive weight has been lifted.

This whole thing has just been very poor messaging and some serious miscommunication, and DOES NOT indicate any actual shift in how iNat is planning to operate.

  • The lack of communication updates has been because everyone on staff is freaked out and overwhelmed by the amount of backlash, and in a bit of paralysis about how to appropriately respond. There is no nefarious reason for it.
  • The grant from Google is indeed a grant, and they are not receiving any data or anything else in exchange for it (I’m sure they’re scraping and stealing stuff anyway, but that’s true for anything posted online)
  • The “generative AI” mention in the grant is badly worded corporate buzzspeak, and doesn’t accurately reflect anything that will be used here - disregard any association to what you normally expect from those words
  • The vast majority of the funds will be used to cover normal operating costs of what iNat does every day. A small amount will be going to some specific grant-related projects, which, again, are not actually genAI. There is no guarantee these things will even be implemented on iNat in the end - if they suck, they’ll be tossed out.
  • The staff are very receptive to user concerns, and there will be a chance for people to speak to them and ask specific questions - since it’s a Friday and everyone is in different time zones, the details haven’t been fully organized yet. I suggested maybe a drop-in Zoom call, or something, where people can join and leave throughout a set time period, so it’s not overwhelmed by a ton of people all competing for talking space at once.

That doesn’t completely cover everything we spoke about, but I’m going to post this now to get it up in the thread - please feel free to ask any questions I might be able to answer!

[–] milicent_bystandr@lemm.ee 29 points 8 hours ago (1 children)

The “generative AI” mention in the grant is badly worded corporate buzzspeak, and doesn’t accurately reflect anything that will be used here - disregard any association to what you normally expect from those words

That sounds particularly suspect, coming with no answer as to what it does mean.

[–] otter@lemmy.ca 8 points 7 hours ago

I agree, and it's also a secondhand account from someone who met with them.

However, it IS enough for me to hold off on deleting anything until I can hear more. My big concern was this point:

The grant from Google is indeed a grant, and they are not receiving any data or anything else in exchange for it (I’m sure they’re scraping and stealing stuff anyway, but that’s true for anything posted online)

[–] dgerard@awful.systems 10 points 7 hours ago (1 children)

yeah, I suggest you keep reading the thread, which points out how they were explicitly talking about generative AI a year before

[–] acockworkorange@mander.xyz 8 points 5 hours ago* (last edited 5 hours ago)

This is the bit:

seblivia

The “generative AI” mention in the grant is badly worded corporate buzzspeak, and doesn’t accurately reflect anything that will be used here - disregard any association to what you normally expect from those words

In the blog post, they described a specific feature they wanted to develop, and linked to a blog post from last year that said they wanted to use a Vision Language Model, which is essentially an LLM with some visual processing attached. This isn’t badly worded corporate buzzspeak; they very clearly gave an example of what they wanted to do and have had a plan in place for at least a year now that involves using generative AI.


Personally, the contradictions between what was said in the blog posts, both a year ago and a few days ago, and what has been said on the forums since then are making it hard to feel like I can trust anything the staff now say about this project. It feels like they’re either wildly backpedalling or have no idea what they’re talking about when it comes to AI, and if it’s the former I’d much prefer for them to just say “we’ve listened to the community’s responses and have decided to pivot towards developing something more like this instead of the original plan to use genAI”.

Maybe I’m just incredibly cynical, but I don’t see how saying you want to use a very specific kind of genAI, and showcasing a mockup of the feature you want to implement and have apparently been planning for at least a year, could be passed off as just “badly worded corporate buzzspeak”.