this post was submitted on 01 Apr 2025
184 points (90.0% liked)

Technology

68244 readers
4237 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
 

When I started angel investing in the late 1990s, a tech investment carried significant technology risk, with the potential upside of groundbreaking innovation. Being an investor at that time meant betting on actual tech, such as nanotech, semiconductors, or biotech.

E-commerce, albeit hyped and interesting, was not considered tech. It was “Business 2.0”, plain and straightforward, hype included.

[–] JayleneSlide@lemmy.world 1 points 21 hours ago (1 children)

And an additional response, because I didn't fully answer your question. LLMs don't reason. They traverse a data structure based on weightings relative to the occurrence frequency in their training content. Loosely speaking, it's a graph (https://en.wikipedia.org/wiki/Graph_(abstract_data_type)). It appears like reasoning because the LLM is iterating over material that has previously been reasoned out. Unlike, say, a squirrel, an LLM can't reason through a problem it hasn't previously seen.
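The "weighted graph traversal" picture above can be sketched as a toy bigram model. To be clear, this is only an illustration of the comment's mental model, not how a real LLM works (real LLMs use learned neural-network weights and attention, not an explicit frequency graph); all names here are invented for the sketch.

```python
import random
from collections import defaultdict

def build_graph(corpus: str):
    """Edges are word->word transitions; weights are occurrence counts."""
    graph = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        graph[prev][nxt] += 1  # edge weight = co-occurrence frequency
    return graph

def generate(graph, start: str, length: int, seed: int = 0) -> str:
    """Walk the graph, choosing each next word weighted by frequency."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length):
        if word not in graph:
            break
        nexts = graph[word]
        word = rng.choices(list(nexts), weights=list(nexts.values()))[0]
        out.append(word)
    return " ".join(out)

g = build_graph("the cat sat on the mat and the cat ran")
print(generate(g, "the", 4, seed=1))
```

A model like this can only recombine transitions it has already seen in its training text, which is exactly the intuition the comment is appealing to.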

[–] KingRandomGuy@lemmy.world 1 points 1 hour ago

It appears like reasoning because the LLM is iterating over material that has been previously reasoned out. An LLM can’t reason through a problem that it hasn’t previously seen

This also isn't an accurate characterization IMO. LLMs, and ML algorithms in general, can generalize to unseen problems, even if imperfectly; for instance, LLMs can produce commands to control robot locomotion, even across robot types they weren't trained on.

"Reasoning" here is based on chains of thought, where the model generates intermediate steps which then help it produce a more accurate final answer. You can fairly argue that this isn't reasoning, but it's not like it's traversing a fixed knowledge graph or something.